Nov 8 00:17:18.963007 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:17:18.963034 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:17:18.963046 kernel: BIOS-provided physical RAM map: Nov 8 00:17:18.963052 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 8 00:17:18.963058 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Nov 8 00:17:18.963064 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Nov 8 00:17:18.963072 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Nov 8 00:17:18.963078 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Nov 8 00:17:18.963085 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Nov 8 00:17:18.963091 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Nov 8 00:17:18.963101 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Nov 8 00:17:18.963107 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Nov 8 00:17:18.963116 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Nov 8 00:17:18.963122 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Nov 8 00:17:18.963132 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Nov 8 00:17:18.963139 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Nov 8 00:17:18.963150 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Nov 8 00:17:18.963159 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Nov 8 00:17:18.963168 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Nov 8 00:17:18.963178 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 8 00:17:18.963184 kernel: NX (Execute Disable) protection: active Nov 8 00:17:18.963191 kernel: APIC: Static calls initialized Nov 8 00:17:18.963198 kernel: efi: EFI v2.7 by EDK II Nov 8 00:17:18.963205 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Nov 8 00:17:18.963212 kernel: SMBIOS 2.8 present. 
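The BIOS-e820 entries above are the firmware's physical memory map: ranges marked usable are what the kernel may claim as RAM, while reserved, ACPI data, and ACPI NVS ranges stay hands-off. A quick way to total the usable memory from a captured log, sketched in Python (the boot.log path is only a placeholder for wherever this console output was saved):

```python
import re

# Matches entries like:
#   BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
E820_RE = re.compile(
    r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] "
    r"(usable|reserved|ACPI NVS|ACPI data|type \d+)"
)

def usable_bytes(log_text):
    """Sum the sizes of every range the firmware marked 'usable'."""
    total = 0
    for start, end, kind in E820_RE.findall(log_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

with open("boot.log") as f:  # placeholder path for the captured console log
    print(f"firmware-usable RAM: {usable_bytes(f.read()) / 2**20:.1f} MiB")
```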
Nov 8 00:17:18.963218 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Nov 8 00:17:18.963225 kernel: Hypervisor detected: KVM Nov 8 00:17:18.963235 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 8 00:17:18.963242 kernel: kvm-clock: using sched offset of 5199779301 cycles Nov 8 00:17:18.963249 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 8 00:17:18.963256 kernel: tsc: Detected 2794.750 MHz processor Nov 8 00:17:18.963263 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:17:18.963271 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:17:18.963278 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Nov 8 00:17:18.963285 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 8 00:17:18.963292 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:17:18.963301 kernel: Using GB pages for direct mapping Nov 8 00:17:18.963308 kernel: Secure boot disabled Nov 8 00:17:18.963315 kernel: ACPI: Early table checksum verification disabled Nov 8 00:17:18.963322 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Nov 8 00:17:18.963333 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 8 00:17:18.963341 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:17:18.963348 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:17:18.963358 kernel: ACPI: FACS 0x000000009CBDD000 000040 Nov 8 00:17:18.963365 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:17:18.963376 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:17:18.963383 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:17:18.963390 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:17:18.963398 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 8 00:17:18.963405 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Nov 8 00:17:18.963415 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Nov 8 00:17:18.963423 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Nov 8 00:17:18.963430 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Nov 8 00:17:18.963437 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Nov 8 00:17:18.963444 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Nov 8 00:17:18.963451 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Nov 8 00:17:18.963459 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Nov 8 00:17:18.963466 kernel: No NUMA configuration found Nov 8 00:17:18.963476 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Nov 8 00:17:18.963486 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Nov 8 00:17:18.963493 kernel: Zone ranges: Nov 8 00:17:18.963510 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:17:18.963517 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Nov 8 00:17:18.963524 kernel: Normal empty Nov 8 00:17:18.963532 kernel: Movable zone start for each node Nov 8 00:17:18.963539 kernel: Early memory node ranges Nov 8 00:17:18.963546 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Nov 8 00:17:18.963553 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Nov 8 00:17:18.963560 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Nov 8 00:17:18.963571 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Nov 8 00:17:18.963578 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Nov 8 00:17:18.963585 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Nov 8 00:17:18.963595 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Nov 8 00:17:18.963602 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:17:18.963610 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 8 00:17:18.963617 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Nov 8 00:17:18.963624 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:17:18.963631 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Nov 8 00:17:18.963641 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 8 00:17:18.963648 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Nov 8 00:17:18.963656 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 8 00:17:18.963663 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 8 00:17:18.963670 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:17:18.963678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 8 00:17:18.963685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 8 00:17:18.963692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:17:18.963699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 8 00:17:18.963710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 8 00:17:18.963717 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:17:18.963724 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 8 00:17:18.963731 kernel: TSC deadline timer available Nov 8 00:17:18.963739 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 8 00:17:18.963746 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 8 00:17:18.963753 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 8 00:17:18.963760 kernel: kvm-guest: setup PV sched yield Nov 8 00:17:18.963768 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Nov 8 00:17:18.963775 kernel: Booting paravirtualized kernel on KVM Nov 8 00:17:18.963785 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:17:18.963793 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 8 00:17:18.963800 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288 Nov 8 00:17:18.963807 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152 Nov 8 00:17:18.963815 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 8 00:17:18.963822 kernel: kvm-guest: PV spinlocks enabled Nov 8 00:17:18.963829 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 8 00:17:18.963837 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:17:18.963861 kernel: random: crng init done Nov 8 
00:17:18.963869 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:17:18.963876 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:17:18.963884 kernel: Fallback order for Node 0: 0 Nov 8 00:17:18.963891 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Nov 8 00:17:18.963898 kernel: Policy zone: DMA32 Nov 8 00:17:18.963905 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:17:18.963913 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 166140K reserved, 0K cma-reserved) Nov 8 00:17:18.963920 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 8 00:17:18.963931 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:17:18.963938 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:17:18.963945 kernel: Dynamic Preempt: voluntary Nov 8 00:17:18.963953 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:17:18.963973 kernel: rcu: RCU event tracing is enabled. Nov 8 00:17:18.963983 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 8 00:17:18.963991 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:17:18.963999 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:17:18.964007 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:17:18.964014 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:17:18.964022 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 8 00:17:18.964032 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 8 00:17:18.964040 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:17:18.964048 kernel: Console: colour dummy device 80x25 Nov 8 00:17:18.964055 kernel: printk: console [ttyS0] enabled Nov 8 00:17:18.964065 kernel: ACPI: Core revision 20230628 Nov 8 00:17:18.964073 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 8 00:17:18.964084 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:17:18.964091 kernel: x2apic enabled Nov 8 00:17:18.964099 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:17:18.964107 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 8 00:17:18.964115 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 8 00:17:18.964122 kernel: kvm-guest: setup PV IPIs Nov 8 00:17:18.964130 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 8 00:17:18.964138 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 8 00:17:18.964145 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Nov 8 00:17:18.964156 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 8 00:17:18.964163 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 8 00:17:18.964171 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 8 00:17:18.964178 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:17:18.964186 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:17:18.964194 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:17:18.964201 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 8 00:17:18.964209 kernel: active return thunk: retbleed_return_thunk Nov 8 00:17:18.964219 kernel: RETBleed: Mitigation: untrained return thunk Nov 8 00:17:18.964227 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:17:18.964235 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:17:18.964243 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 8 00:17:18.964253 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 8 00:17:18.964261 kernel: active return thunk: srso_return_thunk Nov 8 00:17:18.964269 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 8 00:17:18.964276 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:17:18.964284 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:17:18.964294 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:17:18.964302 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:17:18.964310 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 8 00:17:18.964317 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:17:18.964325 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:17:18.964333 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:17:18.964340 kernel: landlock: Up and running. Nov 8 00:17:18.964348 kernel: SELinux: Initializing. Nov 8 00:17:18.964355 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:17:18.964366 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:17:18.964373 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 8 00:17:18.964381 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 8 00:17:18.964389 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 8 00:17:18.964397 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 8 00:17:18.964404 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 8 00:17:18.964412 kernel: ... version: 0 Nov 8 00:17:18.964419 kernel: ... bit width: 48 Nov 8 00:17:18.964427 kernel: ... generic registers: 6 Nov 8 00:17:18.964437 kernel: ... value mask: 0000ffffffffffff Nov 8 00:17:18.964445 kernel: ... max period: 00007fffffffffff Nov 8 00:17:18.964452 kernel: ... fixed-purpose events: 0 Nov 8 00:17:18.964462 kernel: ... 
event mask: 000000000000003f Nov 8 00:17:18.964473 kernel: signal: max sigframe size: 1776 Nov 8 00:17:18.964480 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:17:18.964488 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:17:18.964496 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:17:18.964512 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:17:18.964523 kernel: .... node #0, CPUs: #1 #2 #3 Nov 8 00:17:18.964530 kernel: smp: Brought up 1 node, 4 CPUs Nov 8 00:17:18.964538 kernel: smpboot: Max logical packages: 1 Nov 8 00:17:18.964545 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Nov 8 00:17:18.964553 kernel: devtmpfs: initialized Nov 8 00:17:18.964561 kernel: x86/mm: Memory block size: 128MB Nov 8 00:17:18.964568 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Nov 8 00:17:18.964576 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Nov 8 00:17:18.964585 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Nov 8 00:17:18.964595 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Nov 8 00:17:18.964603 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Nov 8 00:17:18.964611 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:17:18.964619 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 8 00:17:18.964626 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:17:18.964634 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:17:18.964641 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:17:18.964649 kernel: audit: type=2000 audit(1762561037.565:1): state=initialized audit_enabled=0 res=1 Nov 8 00:17:18.964657 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:17:18.964667 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:17:18.964674 kernel: cpuidle: using governor menu Nov 8 00:17:18.964682 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:17:18.964690 kernel: dca service started, version 1.12.1 Nov 8 00:17:18.964697 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 8 00:17:18.964705 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 8 00:17:18.964713 kernel: PCI: Using configuration type 1 for base access Nov 8 00:17:18.964721 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
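The "Kernel command line:" entry above shows dracut prepending rootflags=rw mount.usrflags=ro to the bootloader's arguments, so those two parameters appear twice; the kernel simply processes every occurrence in order. Splitting such a line into key/value pairs is straightforward, as in this small sketch (a naive split that ignores quoted values, which this particular command line doesn't use):

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into (key, value) tuples.

    Bare flags get a value of None; duplicates are kept in order,
    since the kernel processes every occurrence. Quoted values with
    spaces are not handled by this naive split.
    """
    params = []
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params.append((key, value if sep else None))
    return params

line = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
        "mount.usr=/dev/mapper/usr rootflags=rw mount.usrflags=ro "
        "root=LABEL=ROOT console=ttyS0,115200")
for key, value in parse_cmdline(line):
    print(key, "=", value)
```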
Nov 8 00:17:18.964728 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:17:18.964739 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:17:18.964747 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:17:18.964754 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:17:18.964762 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:17:18.964770 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:17:18.964777 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:17:18.964785 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:17:18.964793 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:17:18.964800 kernel: ACPI: Interpreter enabled Nov 8 00:17:18.964811 kernel: ACPI: PM: (supports S0 S3 S5) Nov 8 00:17:18.964819 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:17:18.964826 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:17:18.964834 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:17:18.964842 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 8 00:17:18.964861 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:17:18.965090 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:17:18.965229 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 8 00:17:18.965362 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 8 00:17:18.965372 kernel: PCI host bridge to bus 0000:00 Nov 8 00:17:18.965552 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:17:18.965673 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:17:18.965790 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:17:18.965928 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 8 00:17:18.966052 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:17:18.966167 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Nov 8 00:17:18.966284 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:17:18.966445 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 8 00:17:18.966610 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 8 00:17:18.966740 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Nov 8 00:17:18.966895 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Nov 8 00:17:18.967031 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Nov 8 00:17:18.967157 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Nov 8 00:17:18.967292 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:17:18.967441 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 8 00:17:18.967587 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Nov 8 00:17:18.967724 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Nov 8 00:17:18.967869 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Nov 8 00:17:18.968026 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 8 00:17:18.968155 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Nov 8 00:17:18.968282 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Nov 8 00:17:18.968409 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Nov 8 00:17:18.968576 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 8 00:17:18.968721 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Nov 8 00:17:18.968939 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Nov 8 00:17:18.969309 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Nov 8 00:17:18.969439 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Nov 8 00:17:18.969590 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 8 00:17:18.969718 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 8 00:17:18.969877 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 8 00:17:18.970019 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Nov 8 00:17:18.970240 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Nov 8 00:17:18.970413 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 8 00:17:18.970582 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Nov 8 00:17:18.970594 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 8 00:17:18.970602 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 8 00:17:18.970610 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:17:18.970618 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 8 00:17:18.970626 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 8 00:17:18.970639 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 8 00:17:18.970646 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 8 00:17:18.970654 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 8 00:17:18.970662 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 8 00:17:18.970670 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 8 00:17:18.970677 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 8 00:17:18.970685 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 8 00:17:18.970693 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 8 00:17:18.970701 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 8 00:17:18.970711 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 8 00:17:18.970719 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 8 00:17:18.970726 kernel: iommu: Default domain type: Translated Nov 8 00:17:18.970734 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:17:18.970742 kernel: efivars: Registered efivars operations Nov 8 00:17:18.970749 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:17:18.970757 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:17:18.970765 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Nov 8 00:17:18.970773 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Nov 8 00:17:18.970783 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Nov 8 00:17:18.970791 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Nov 8 00:17:18.970949 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 8 00:17:18.971077 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 8 00:17:18.971209 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:17:18.971219 kernel: vgaarb: loaded Nov 8 00:17:18.971227 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 8 00:17:18.971235 kernel: hpet0: 3 comparators, 
64-bit 100.000000 MHz counter Nov 8 00:17:18.971243 kernel: clocksource: Switched to clocksource kvm-clock Nov 8 00:17:18.971255 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:17:18.971263 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:17:18.971271 kernel: pnp: PnP ACPI init Nov 8 00:17:18.971438 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 8 00:17:18.971455 kernel: pnp: PnP ACPI: found 6 devices Nov 8 00:17:18.971466 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:17:18.971474 kernel: NET: Registered PF_INET protocol family Nov 8 00:17:18.971481 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:17:18.971493 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 8 00:17:18.971511 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:17:18.971519 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:17:18.971527 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 8 00:17:18.971534 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 8 00:17:18.971542 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:17:18.971550 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:17:18.971558 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:17:18.971566 kernel: NET: Registered PF_XDP protocol family Nov 8 00:17:18.971764 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Nov 8 00:17:18.971970 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Nov 8 00:17:18.972093 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:17:18.972208 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:17:18.972322 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:17:18.972435 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 8 00:17:18.972558 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 8 00:17:18.972756 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Nov 8 00:17:18.972774 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:17:18.972782 kernel: Initialise system trusted keyrings Nov 8 00:17:18.972790 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 8 00:17:18.972798 kernel: Key type asymmetric registered Nov 8 00:17:18.972806 kernel: Asymmetric key parser 'x509' registered Nov 8 00:17:18.972813 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:17:18.972821 kernel: io scheduler mq-deadline registered Nov 8 00:17:18.972829 kernel: io scheduler kyber registered Nov 8 00:17:18.972839 kernel: io scheduler bfq registered Nov 8 00:17:18.972860 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:17:18.972869 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 8 00:17:18.972877 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 8 00:17:18.972884 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 8 00:17:18.972892 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:17:18.972900 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:17:18.972908 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 
1,12 Nov 8 00:17:18.972915 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:17:18.972926 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:17:18.973071 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 8 00:17:18.973083 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:17:18.973201 kernel: rtc_cmos 00:04: registered as rtc0 Nov 8 00:17:18.973319 kernel: rtc_cmos 00:04: setting system clock to 2025-11-08T00:17:18 UTC (1762561038) Nov 8 00:17:18.973435 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 8 00:17:18.973445 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 8 00:17:18.973453 kernel: efifb: probing for efifb Nov 8 00:17:18.973465 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Nov 8 00:17:18.973473 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Nov 8 00:17:18.973480 kernel: efifb: scrolling: redraw Nov 8 00:17:18.973488 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Nov 8 00:17:18.973496 kernel: Console: switching to colour frame buffer device 100x37 Nov 8 00:17:18.973516 kernel: fb0: EFI VGA frame buffer device Nov 8 00:17:18.973544 kernel: pstore: Using crash dump compression: deflate Nov 8 00:17:18.973555 kernel: pstore: Registered efi_pstore as persistent store backend Nov 8 00:17:18.973563 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:17:18.973574 kernel: Segment Routing with IPv6 Nov 8 00:17:18.973582 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:17:18.973590 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:17:18.973598 kernel: Key type dns_resolver registered Nov 8 00:17:18.973606 kernel: IPI shorthand broadcast: enabled Nov 8 00:17:18.973616 kernel: sched_clock: Marking stable (1013003350, 205117876)->(1281218093, -63096867) Nov 8 00:17:18.973627 kernel: registered taskstats version 1 Nov 8 00:17:18.973637 kernel: Loading compiled-in X.509 certificates Nov 8 00:17:18.973647 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:17:18.973659 kernel: Key type .fscrypt registered Nov 8 00:17:18.973666 kernel: Key type fscrypt-provisioning registered Nov 8 00:17:18.973674 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:17:18.973682 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:17:18.973690 kernel: ima: No architecture policies found Nov 8 00:17:18.973698 kernel: clk: Disabling unused clocks Nov 8 00:17:18.973706 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:17:18.973714 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:17:18.973722 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:17:18.973730 kernel: Run /init as init process Nov 8 00:17:18.973741 kernel: with arguments: Nov 8 00:17:18.973749 kernel: /init Nov 8 00:17:18.973759 kernel: with environment: Nov 8 00:17:18.973767 kernel: HOME=/ Nov 8 00:17:18.973775 kernel: TERM=linux Nov 8 00:17:18.973785 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:17:18.973795 systemd[1]: Detected virtualization kvm. Nov 8 00:17:18.973807 systemd[1]: Detected architecture x86-64. 
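The rtc_cmos line above anchors the wall clock: the CMOS RTC reads 2025-11-08T00:17:18 UTC, Unix time 1762561038, consistent with the audit timestamp audit(1762561037.565:1) logged roughly half a second earlier. The conversion checks out:

```python
from datetime import datetime, timezone

rtc_epoch = 1762561038  # epoch seconds from the rtc_cmos log line
print(datetime.fromtimestamp(rtc_epoch, tz=timezone.utc).isoformat())
# -> 2025-11-08T00:17:18+00:00
```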
Nov 8 00:17:18.973815 systemd[1]: Running in initrd. Nov 8 00:17:18.973826 systemd[1]: No hostname configured, using default hostname. Nov 8 00:17:18.973834 systemd[1]: Hostname set to . Nov 8 00:17:18.973912 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:17:18.973925 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:17:18.973934 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:17:18.973943 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:17:18.973952 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:17:18.973961 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:17:18.973969 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:17:18.973978 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:17:18.973991 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:17:18.974000 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:17:18.974009 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:17:18.974017 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:17:18.974026 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:17:18.974034 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:17:18.974043 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:17:18.974052 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:17:18.974063 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:17:18.974071 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:17:18.974080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:17:18.974089 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:17:18.974097 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:17:18.974106 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:17:18.974115 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:17:18.974123 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:17:18.974132 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:17:18.974145 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:17:18.974155 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:17:18.974165 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:17:18.974173 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:17:18.974182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:17:18.974191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:17:18.974199 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:17:18.974208 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
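The device unit names above follow systemd's path-escaping rules: strip the leading slash, hex-escape reserved characters such as literal dashes, and turn each remaining '/' into '-', so /dev/disk/by-label/EFI-SYSTEM becomes dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. A minimal sketch covering just the cases seen in this log (the real systemd-escape also handles dots, leading dashes, and non-ASCII bytes):

```python
def escape_path_unit(path, suffix=".device"):
    """Roughly mimic `systemd-escape --path` for the names in this log.

    Only '-' and '/' are handled; the real algorithm also escapes
    dots, leading dashes, and non-ASCII bytes.
    """
    trimmed = path.strip("/")
    return trimmed.replace("-", r"\x2d").replace("/", "-") + suffix

print(escape_path_unit("/dev/disk/by-label/EFI-SYSTEM"))
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```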
Nov 8 00:17:18.974239 systemd-journald[192]: Collecting audit messages is disabled. Nov 8 00:17:18.974260 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:17:18.974271 systemd-journald[192]: Journal started Nov 8 00:17:18.974290 systemd-journald[192]: Runtime Journal (/run/log/journal/2dfce182812d4ead8bc0367ee23ddeeb) is 6.0M, max 48.3M, 42.2M free. Nov 8 00:17:18.970515 systemd-modules-load[193]: Inserted module 'overlay' Nov 8 00:17:18.982876 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:17:18.982900 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:17:18.985087 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:17:18.998862 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:17:19.001765 systemd-modules-load[193]: Inserted module 'br_netfilter' Nov 8 00:17:19.003319 kernel: Bridge firewalling registered Nov 8 00:17:19.012166 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:17:19.014286 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:17:19.014982 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:17:19.015771 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:17:19.027520 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:17:19.031221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:17:19.033973 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:17:19.037677 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:17:19.049129 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:17:19.052386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:17:19.058375 dracut-cmdline[221]: dracut-dracut-053 Nov 8 00:17:19.061415 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:17:19.070625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:17:19.084199 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:17:19.119125 systemd-resolved[248]: Positive Trust Anchors: Nov 8 00:17:19.119140 systemd-resolved[248]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:17:19.119172 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:17:19.135942 systemd-resolved[248]: Defaulting to hostname 'linux'. Nov 8 00:17:19.139130 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:17:19.140302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:17:19.167882 kernel: SCSI subsystem initialized Nov 8 00:17:19.177876 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:17:19.189884 kernel: iscsi: registered transport (tcp) Nov 8 00:17:19.212336 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:17:19.212366 kernel: QLogic iSCSI HBA Driver Nov 8 00:17:19.274801 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:17:19.292046 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:17:19.320129 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:17:19.320172 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:17:19.321760 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:17:19.368885 kernel: raid6: avx2x4 gen() 24033 MB/s Nov 8 00:17:19.385867 kernel: raid6: avx2x2 gen() 30824 MB/s Nov 8 00:17:19.403701 kernel: raid6: avx2x1 gen() 25914 MB/s Nov 8 00:17:19.403724 kernel: raid6: using algorithm avx2x2 gen() 30824 MB/s Nov 8 00:17:19.421657 kernel: raid6: .... xor() 19805 MB/s, rmw enabled Nov 8 00:17:19.421680 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:17:19.442880 kernel: xor: automatically using best checksumming function avx Nov 8 00:17:19.610907 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:17:19.627260 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:17:19.642141 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:17:19.654923 systemd-udevd[415]: Using default interface naming scheme 'v255'. Nov 8 00:17:19.659659 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:17:19.671013 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:17:19.685432 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Nov 8 00:17:19.720079 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:17:19.731053 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:17:19.798378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:17:19.807024 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:17:19.822575 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:17:19.827757 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
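The raid6 lines above are the kernel benchmarking its parity-generation routines at boot and keeping the fastest; "using algorithm avx2x2 gen() 30824 MB/s" is simply the argmax over the measured throughputs:

```python
# Throughputs copied from the raid6 benchmark lines above (MB/s).
gen_results = {
    "avx2x4": 24033,
    "avx2x2": 30824,
    "avx2x1": 25914,
}
best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
```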
Nov 8 00:17:19.832666 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:17:19.837429 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:17:19.844896 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 8 00:17:19.848329 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 8 00:17:19.850048 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:17:19.860929 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:17:19.860965 kernel: GPT:9289727 != 19775487 Nov 8 00:17:19.860977 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:17:19.860987 kernel: GPT:9289727 != 19775487 Nov 8 00:17:19.860997 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:17:19.861007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:17:19.863841 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:17:19.869868 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:17:19.872904 kernel: libata version 3.00 loaded. Nov 8 00:17:19.879870 kernel: ahci 0000:00:1f.2: version 3.0 Nov 8 00:17:19.882868 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:17:19.888942 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 8 00:17:19.888967 kernel: AES CTR mode by8 optimization enabled Nov 8 00:17:19.888978 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 8 00:17:19.889151 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 8 00:17:19.891212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:17:19.900024 kernel: scsi host0: ahci Nov 8 00:17:19.891290 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:17:19.905705 kernel: scsi host1: ahci Nov 8 00:17:19.893669 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:17:19.895611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:17:19.912029 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463) Nov 8 00:17:19.912046 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (465) Nov 8 00:17:19.895670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:17:19.917968 kernel: scsi host2: ahci Nov 8 00:17:19.918244 kernel: scsi host3: ahci Nov 8 00:17:19.918597 kernel: scsi host4: ahci Nov 8 00:17:19.906784 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:17:19.929952 kernel: scsi host5: ahci Nov 8 00:17:19.930150 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Nov 8 00:17:19.930162 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Nov 8 00:17:19.930173 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Nov 8 00:17:19.930183 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Nov 8 00:17:19.930193 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Nov 8 00:17:19.930203 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Nov 8 00:17:19.937004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:17:19.957545 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
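Above, virtio_blk reports 19775488 512-byte logical blocks, and the GPT warnings ("GPT:9289727 != 19775487") mean the backup GPT header still sits where a smaller original image ended; the disk-uuid service later in this log rewrites the headers to match the resized disk. The quoted capacities follow directly from the block count:

```python
blocks = 19775488            # 512-byte logical blocks reported by virtio_blk
size = blocks * 512
print(f"{size / 1e9:.1f} GB")     # decimal: 10.1 GB
print(f"{size / 2**30:.2f} GiB")  # binary:  9.43 GiB
```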
Nov 8 00:17:19.962120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:17:19.979236 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 8 00:17:19.988425 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:17:19.996360 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 8 00:17:20.000778 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 8 00:17:20.022000 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:17:20.025826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:17:20.025904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:17:20.029878 disk-uuid[554]: Primary Header is updated. Nov 8 00:17:20.029878 disk-uuid[554]: Secondary Entries is updated. Nov 8 00:17:20.029878 disk-uuid[554]: Secondary Header is updated. Nov 8 00:17:20.035779 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:17:20.035146 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:17:20.037867 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:17:20.044003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:17:20.077513 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:17:20.086357 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:17:20.109082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:17:20.243887 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 8 00:17:20.243969 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 8 00:17:20.245893 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 8 00:17:20.246894 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 8 00:17:20.247882 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 8 00:17:20.251301 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 8 00:17:20.251316 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 8 00:17:20.251326 kernel: ata3.00: applying bridge limits Nov 8 00:17:20.253049 kernel: ata3.00: configured for UDMA/100 Nov 8 00:17:20.253873 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 8 00:17:20.301485 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 8 00:17:20.301778 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:17:20.313887 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:17:21.039763 disk-uuid[555]: The operation has completed successfully. Nov 8 00:17:21.041472 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:17:21.075008 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:17:21.075133 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:17:21.094995 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:17:21.099727 sh[597]: Success Nov 8 00:17:21.112892 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 8 00:17:21.145400 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:17:21.156571 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
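verity-setup.service above maps /usr through dm-verity, so every block read from /dev/mapper/usr is verified against the Merkle tree whose root is the verity.usrhash value on the kernel command line ("sha256 using implementation sha256-ni" is the kernel selecting its SHA-NI-accelerated routine). As a loose analogy only, not the actual on-disk format, a chunked digest over a hypothetical image file looks like:

```python
import hashlib

def flat_digest(path, block_size=4096):
    """Hash a file in 4 KiB chunks.

    An analogy only: real dm-verity hashes each 4K block into a
    Merkle tree and the kernel checks blocks lazily on read; it
    never computes one flat digest like this.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            h.update(chunk)
    return h.hexdigest()

# print(flat_digest("usr.img"))  # hypothetical image path
```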
Nov 8 00:17:21.159380 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:17:21.172394 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:17:21.172425 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:17:21.172436 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:17:21.175391 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:17:21.175413 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:17:21.180652 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:17:21.182633 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:17:21.197015 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:17:21.199332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:17:21.210445 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:17:21.210487 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:17:21.210498 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:17:21.214869 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:17:21.224027 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:17:21.227869 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:17:21.237163 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:17:21.244077 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:17:21.316142 ignition[697]: Ignition 2.19.0 Nov 8 00:17:21.316154 ignition[697]: Stage: fetch-offline Nov 8 00:17:21.316192 ignition[697]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:17:21.316202 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:17:21.316297 ignition[697]: parsed url from cmdline: "" Nov 8 00:17:21.316301 ignition[697]: no config URL provided Nov 8 00:17:21.316307 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:17:21.323646 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:17:21.316316 ignition[697]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:17:21.332284 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:17:21.316343 ignition[697]: op(1): [started] loading QEMU firmware config module Nov 8 00:17:21.316348 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 8 00:17:21.325813 ignition[697]: op(1): [finished] loading QEMU firmware config module Nov 8 00:17:21.367128 systemd-networkd[785]: lo: Link UP Nov 8 00:17:21.367138 systemd-networkd[785]: lo: Gained carrier Nov 8 00:17:21.368795 systemd-networkd[785]: Enumeration completed Nov 8 00:17:21.368933 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:17:21.370364 systemd[1]: Reached target network.target - Network. Nov 8 00:17:21.370669 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:17:21.370674 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
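With no config URL provided, Ignition's fetch-offline stage above loads the user config through the qemu_fw_cfg module, then logs a SHA512 digest of whatever JSON it received (the "parsing config with SHA512: 75a1b1…" line just below). A toy reproduction of that digest step over a minimal, hypothetical config; the spec version and field names here are illustrative assumptions, and the digest will of course not match the one in this log:

```python
import hashlib
import json

# Minimal Ignition-style user config; schema details are an
# assumption here, not taken from this log.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"]}
        ]
    },
}
blob = json.dumps(config).encode()
print("parsing config with SHA512:", hashlib.sha512(blob).hexdigest())
```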
Nov 8 00:17:21.371451 systemd-networkd[785]: eth0: Link UP Nov 8 00:17:21.371456 systemd-networkd[785]: eth0: Gained carrier Nov 8 00:17:21.371463 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:17:21.401914 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 8 00:17:21.436999 ignition[697]: parsing config with SHA512: 75a1b10348507034c5b0af30771e7b5c702304b7327ba7bf4d69993b313f013eade2dcd83c1fad06ce4e832c4aeaa994548858033ce9fa0c4ff385ace13d8352 Nov 8 00:17:21.441980 unknown[697]: fetched base config from "system" Nov 8 00:17:21.441992 unknown[697]: fetched user config from "qemu" Nov 8 00:17:21.443567 ignition[697]: fetch-offline: fetch-offline passed Nov 8 00:17:21.443835 ignition[697]: Ignition finished successfully Nov 8 00:17:21.448342 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:17:21.449613 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 8 00:17:21.461051 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:17:21.501012 ignition[789]: Ignition 2.19.0 Nov 8 00:17:21.501026 ignition[789]: Stage: kargs Nov 8 00:17:21.501238 ignition[789]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:17:21.501250 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:17:21.506956 ignition[789]: kargs: kargs passed Nov 8 00:17:21.507014 ignition[789]: Ignition finished successfully Nov 8 00:17:21.512270 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:17:21.519032 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:17:21.538474 ignition[796]: Ignition 2.19.0 Nov 8 00:17:21.538486 ignition[796]: Stage: disks Nov 8 00:17:21.538683 ignition[796]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:17:21.538698 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:17:21.542204 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:17:21.540262 ignition[796]: disks: disks passed Nov 8 00:17:21.545094 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:17:21.540333 ignition[796]: Ignition finished successfully Nov 8 00:17:21.548044 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:17:21.551724 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:17:21.553478 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:17:21.556268 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:17:21.566048 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:17:21.579501 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:17:21.586008 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:17:21.605934 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:17:21.698878 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:17:21.699348 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:17:21.702585 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
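The systemd-fsck summary above, "ROOT: clean, 14/553520 files, 52654/553472 blocks", is inodes-used/inodes-total and blocks-used/blocks-total for the ext4 ROOT filesystem, i.e. a nearly empty volume:

```python
files_used, files_total = 14, 553520
blocks_used, blocks_total = 52654, 553472
print(f"inodes: {100 * files_used / files_total:.3f}% used")    # ~0.003%
print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")  # ~9.5%
```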
Nov 8 00:17:21.714926 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:17:21.719026 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:17:21.723960 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Nov 8 00:17:21.724190 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:17:21.732029 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:17:21.732055 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:17:21.732080 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:17:21.724251 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:17:21.738691 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:17:21.732000 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:17:21.742304 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:17:21.745379 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:17:21.765017 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:17:21.799712 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:17:21.805552 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:17:21.811335 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:17:21.817093 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:17:21.901889 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:17:21.916009 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:17:21.920052 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:17:21.925878 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:17:21.945550 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:17:22.005688 ignition[931]: INFO : Ignition 2.19.0 Nov 8 00:17:22.005688 ignition[931]: INFO : Stage: mount Nov 8 00:17:22.008283 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:17:22.008283 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:17:22.008283 ignition[931]: INFO : mount: mount passed Nov 8 00:17:22.008283 ignition[931]: INFO : Ignition finished successfully Nov 8 00:17:22.015751 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:17:22.030973 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:17:22.170470 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:17:22.184006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:17:22.192874 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945) Nov 8 00:17:22.192902 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:17:22.192914 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:17:22.194266 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:17:22.197874 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:17:22.199378 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:17:22.218922 ignition[962]: INFO : Ignition 2.19.0
Nov 8 00:17:22.218922 ignition[962]: INFO : Stage: files
Nov 8 00:17:22.221614 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:17:22.221614 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:17:22.221614 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:17:22.227795 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:17:22.227795 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:17:22.234051 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:17:22.236303 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:17:22.238811 unknown[962]: wrote ssh authorized keys file for user: core
Nov 8 00:17:22.240434 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:17:22.243733 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:17:22.246759 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:17:22.246759 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:17:22.246759 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 8 00:17:22.297280 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 8 00:17:22.398909 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:17:22.402743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 8 00:17:22.682862 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 8 00:17:22.861033 systemd-networkd[785]: eth0: Gained IPv6LL
Nov 8 00:17:23.306603 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:17:23.306603 ignition[962]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Nov 8 00:17:23.312764 ignition[962]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:17:23.361061 ignition[962]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:17:23.365548 ignition[962]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:17:23.368135 ignition[962]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:17:23.368135 ignition[962]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:17:23.368135 ignition[962]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:17:23.368135 ignition[962]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:17:23.368135 ignition[962]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:17:23.368135 ignition[962]: INFO : files: files passed
Nov 8 00:17:23.368135 ignition[962]: INFO : Ignition finished successfully
Nov 8 00:17:23.384898 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:17:23.394116 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:17:23.398577 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:17:23.402704 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:17:23.404254 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:17:23.410518 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 8 00:17:23.415281 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:17:23.415281 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:17:23.420320 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:17:23.424298 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:17:23.428449 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:17:23.447007 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:17:23.475596 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:17:23.475730 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:17:23.479297 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:17:23.480290 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:17:23.484782 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:17:23.489625 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:17:23.510962 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:17:23.520090 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:17:23.528976 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:17:23.532814 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:17:23.536631 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:17:23.539581 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:17:23.541157 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:17:23.545228 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:17:23.548570 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:17:23.551516 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:17:23.555234 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
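(Aside, not part of the journal: the files-stage ops above (writing files and links, installing units and drop-ins, setting presets) are driven by an Ignition spec 3.x config. A rough Python sketch, assuming a local copy of such a config saved as the hypothetical "ignition.json", that summarizes the same classes of operations; the field names follow the published Ignition config spec.)

    import json

    with open("ignition.json") as f:  # hypothetical local copy of the config
        cfg = json.load(f)

    # Files and symlinks under the "storage" section.
    for entry in cfg.get("storage", {}).get("files", []):
        print("write file:", entry["path"])
    for entry in cfg.get("storage", {}).get("links", []):
        print("write link:", entry["path"], "->", entry.get("target"))

    # Units under the "systemd" section; "enabled" maps to the preset ops above.
    for unit in cfg.get("systemd", {}).get("units", []):
        state = "enabled" if unit.get("enabled") else "disabled"
        print("unit:", unit["name"], "preset:", state)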
Nov 8 00:17:23.558984 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:17:23.562602 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:17:23.565951 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:17:23.569963 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:17:23.573306 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:17:23.576581 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:17:23.579207 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:17:23.580786 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:17:23.584401 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:17:23.587912 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:17:23.591772 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:17:23.593314 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:17:23.597546 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:17:23.599118 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:17:23.602685 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:17:23.604402 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:17:23.608203 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:17:23.611059 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:17:23.614894 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:17:23.619328 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:17:23.622270 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:17:23.625277 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:17:23.626650 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:17:23.629984 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:17:23.631455 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:17:23.634791 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:17:23.636665 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:17:23.640761 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:17:23.642293 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:17:23.660988 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:17:23.664045 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:17:23.664169 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:17:23.670770 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:17:23.671443 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:17:23.671572 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:17:23.678473 ignition[1016]: INFO : Ignition 2.19.0
Nov 8 00:17:23.678473 ignition[1016]: INFO : Stage: umount
Nov 8 00:17:23.678473 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:17:23.678473 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:17:23.678473 ignition[1016]: INFO : umount: umount passed
Nov 8 00:17:23.678473 ignition[1016]: INFO : Ignition finished successfully
Nov 8 00:17:23.672363 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:17:23.672542 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:17:23.682330 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:17:23.682457 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:17:23.686785 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:17:23.686949 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:17:23.690279 systemd[1]: Stopped target network.target - Network.
Nov 8 00:17:23.691924 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:17:23.691983 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:17:23.694609 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:17:23.694773 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:17:23.697657 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:17:23.697711 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:17:23.701743 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:17:23.701796 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:17:23.702802 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:17:23.707951 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:17:23.714898 systemd-networkd[785]: eth0: DHCPv6 lease lost
Nov 8 00:17:23.719153 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:17:23.719441 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:17:23.720934 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:17:23.721162 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:17:23.725598 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:17:23.725694 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:17:23.736024 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:17:23.736642 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:17:23.736709 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:17:23.740534 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:17:23.740588 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:17:23.741310 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:17:23.741356 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:17:23.749698 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:17:23.749749 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:17:23.750707 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:17:23.770481 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:17:23.770619 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:17:23.776739 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:17:23.776940 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:17:23.777754 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:17:23.777812 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:17:23.785011 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:17:23.785056 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:17:23.785831 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:17:23.785895 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:17:23.792921 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:17:23.792972 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:17:23.794416 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:17:23.794470 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:17:23.814053 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:17:23.814691 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:17:23.814749 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:17:23.818468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:17:23.818522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:17:23.822190 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:17:23.822309 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:17:23.856221 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:17:24.040645 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:17:24.040873 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:17:24.042567 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:17:24.048121 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:17:24.048279 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:17:24.058146 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:17:24.067459 systemd[1]: Switching root.
Nov 8 00:17:24.110876 systemd-journald[192]: Journal stopped
Nov 8 00:17:25.432509 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:17:25.432577 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:17:25.432600 kernel: SELinux: policy capability open_perms=1
Nov 8 00:17:25.432616 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:17:25.432628 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:17:25.432639 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:17:25.432657 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:17:25.432669 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:17:25.432680 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:17:25.432692 kernel: audit: type=1403 audit(1762561044.565:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:17:25.432704 systemd[1]: Successfully loaded SELinux policy in 45.250ms.
Nov 8 00:17:25.432734 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.763ms.
Nov 8 00:17:25.432747 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:17:25.432760 systemd[1]: Detected virtualization kvm.
Nov 8 00:17:25.432772 systemd[1]: Detected architecture x86-64.
Nov 8 00:17:25.432784 systemd[1]: Detected first boot.
Nov 8 00:17:25.432796 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:17:25.432808 zram_generator::config[1083]: No configuration found.
Nov 8 00:17:25.432823 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:17:25.432838 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:17:25.432863 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 8 00:17:25.432877 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:17:25.432889 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:17:25.432901 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:17:25.432913 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:17:25.432926 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:17:25.432938 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:17:25.432951 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:17:25.432967 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:17:25.432979 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:17:25.432992 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:17:25.433004 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:17:25.433017 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:17:25.433030 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:17:25.433042 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:17:25.433054 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 00:17:25.433066 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:17:25.433081 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:17:25.433094 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:17:25.433106 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:17:25.433118 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:17:25.433130 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:17:25.433142 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:17:25.433154 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:17:25.433166 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:17:25.433181 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:17:25.433193 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:17:25.433205 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:17:25.433218 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:17:25.433230 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:17:25.433242 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:17:25.433254 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:17:25.433266 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:17:25.433279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:17:25.433294 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:17:25.433306 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:17:25.433321 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:17:25.433335 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:17:25.433347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:17:25.433367 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:17:25.433379 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:17:25.433391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:17:25.433406 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:17:25.433418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:17:25.433430 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:17:25.433443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:17:25.433460 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:17:25.433472 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 8 00:17:25.433485 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 8 00:17:25.433498 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:17:25.433513 kernel: loop: module loaded
Nov 8 00:17:25.433524 kernel: fuse: init (API version 7.39)
Nov 8 00:17:25.433536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:17:25.433548 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:17:25.433560 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:17:25.433572 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:17:25.433585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:17:25.433615 systemd-journald[1157]: Collecting audit messages is disabled.
Nov 8 00:17:25.433641 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:17:25.433653 systemd-journald[1157]: Journal started
Nov 8 00:17:25.433675 systemd-journald[1157]: Runtime Journal (/run/log/journal/2dfce182812d4ead8bc0367ee23ddeeb) is 6.0M, max 48.3M, 42.2M free.
Nov 8 00:17:25.438222 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:17:25.440273 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:17:25.442158 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:17:25.443899 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:17:25.445926 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:17:25.447872 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:17:25.449827 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:17:25.452420 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:17:25.452650 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:17:25.454918 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:17:25.455157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:17:25.457393 kernel: ACPI: bus type drm_connector registered
Nov 8 00:17:25.457761 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:17:25.458003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:17:25.460328 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:17:25.460575 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:17:25.462711 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:17:25.462977 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:17:25.465045 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:17:25.465278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:17:25.467413 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:17:25.469573 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:17:25.471934 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:17:25.489300 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:17:25.508986 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:17:25.525251 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:17:25.527094 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:17:25.557997 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:17:25.561043 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:17:25.562900 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:17:25.564187 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:17:25.565944 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:17:25.567281 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:17:25.571757 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:17:25.573641 systemd-journald[1157]: Time spent on flushing to /var/log/journal/2dfce182812d4ead8bc0367ee23ddeeb is 69.136ms for 981 entries.
Nov 8 00:17:25.573641 systemd-journald[1157]: System Journal (/var/log/journal/2dfce182812d4ead8bc0367ee23ddeeb) is 8.0M, max 195.6M, 187.6M free.
Nov 8 00:17:25.995582 systemd-journald[1157]: Received client request to flush runtime journal.
Nov 8 00:17:25.577153 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:17:25.579284 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:17:25.581277 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:17:25.588774 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:17:25.602175 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:17:25.611028 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 8 00:17:25.644406 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
Nov 8 00:17:25.644420 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
Nov 8 00:17:25.651561 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:17:25.822358 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:17:25.875816 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:17:25.879451 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:17:25.894097 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:17:25.980635 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:17:25.993073 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:17:25.997419 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:17:26.028711 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Nov 8 00:17:26.028734 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Nov 8 00:17:26.035295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:17:26.417527 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
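(Aside, not part of the journal: the journald flush statistics above, 69.136 ms for 981 entries, work out to roughly 70 microseconds per entry. A one-line check of that arithmetic.)

    # Figures taken verbatim from the journald message above.
    ms_total, entries = 69.136, 981
    print(f"{ms_total / entries * 1000:.1f} us per entry")  # ~70.5 us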
Nov 8 00:17:26.428147 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:17:26.457788 systemd-udevd[1244]: Using default interface naming scheme 'v255'.
Nov 8 00:17:26.480183 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:17:26.497995 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:17:26.513914 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1251)
Nov 8 00:17:26.519019 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:17:26.535364 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Nov 8 00:17:26.570537 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:17:26.613403 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:17:26.626872 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 8 00:17:26.641881 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Nov 8 00:17:26.643095 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 8 00:17:26.646367 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 8 00:17:26.653053 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 8 00:17:26.665231 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 8 00:17:26.670863 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:17:26.681864 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:17:26.735340 systemd-networkd[1256]: lo: Link UP
Nov 8 00:17:26.735569 systemd-networkd[1256]: lo: Gained carrier
Nov 8 00:17:26.737384 systemd-networkd[1256]: Enumeration completed
Nov 8 00:17:26.737891 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:17:26.737896 systemd-networkd[1256]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:17:26.738653 systemd-networkd[1256]: eth0: Link UP
Nov 8 00:17:26.738658 systemd-networkd[1256]: eth0: Gained carrier
Nov 8 00:17:26.738670 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:17:26.738724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:17:26.741047 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:17:26.750739 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:17:26.754194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:17:26.754552 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:17:26.760115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:17:26.801179 systemd-networkd[1256]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 8 00:17:26.818152 kernel: kvm_amd: TSC scaling supported
Nov 8 00:17:26.818258 kernel: kvm_amd: Nested Virtualization enabled
Nov 8 00:17:26.818282 kernel: kvm_amd: Nested Paging enabled
Nov 8 00:17:26.820152 kernel: kvm_amd: LBR virtualization supported
Nov 8 00:17:26.820207 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 8 00:17:26.822280 kernel: kvm_amd: Virtual GIF supported
Nov 8 00:17:26.844868 kernel: EDAC MC: Ver: 3.0.0
Nov 8 00:17:26.852106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:17:26.885614 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:17:26.895178 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:17:26.906154 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:17:26.936491 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:17:26.938740 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:17:26.951018 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:17:26.956034 lvm[1298]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:17:26.997158 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:17:26.999467 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:17:27.001470 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:17:27.001490 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:17:27.003113 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:17:27.005911 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:17:27.021022 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:17:27.024234 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:17:27.026043 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:17:27.027058 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:17:27.030188 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:17:27.034123 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:17:27.035611 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:17:27.051436 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:17:27.060390 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:17:27.062247 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:17:27.067874 kernel: loop0: detected capacity change from 0 to 142488
Nov 8 00:17:27.085913 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:17:27.112864 kernel: loop1: detected capacity change from 0 to 140768
Nov 8 00:17:27.210879 kernel: loop2: detected capacity change from 0 to 224512
Nov 8 00:17:27.256899 kernel: loop3: detected capacity change from 0 to 142488
Nov 8 00:17:27.271880 kernel: loop4: detected capacity change from 0 to 140768
Nov 8 00:17:27.281897 kernel: loop5: detected capacity change from 0 to 224512
Nov 8 00:17:27.287680 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 8 00:17:27.288782 (sd-merge)[1318]: Merged extensions into '/usr'.
Nov 8 00:17:27.304742 systemd[1]: Reloading requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:17:27.304761 systemd[1]: Reloading...
Nov 8 00:17:27.371944 zram_generator::config[1346]: No configuration found.
Nov 8 00:17:27.448123 ldconfig[1303]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:17:27.524686 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:17:27.592721 systemd[1]: Reloading finished in 287 ms.
Nov 8 00:17:27.614013 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:17:27.641183 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:17:27.644103 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:17:27.647988 systemd[1]: Reloading requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:17:27.648008 systemd[1]: Reloading...
Nov 8 00:17:27.702198 zram_generator::config[1421]: No configuration found.
Nov 8 00:17:27.774838 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:17:27.775260 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:17:27.776326 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:17:27.776646 systemd-tmpfiles[1389]: ACLs are not supported, ignoring.
Nov 8 00:17:27.776733 systemd-tmpfiles[1389]: ACLs are not supported, ignoring.
Nov 8 00:17:27.780193 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:17:27.780205 systemd-tmpfiles[1389]: Skipping /boot
Nov 8 00:17:27.792103 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:17:27.792119 systemd-tmpfiles[1389]: Skipping /boot
Nov 8 00:17:27.853023 systemd-networkd[1256]: eth0: Gained IPv6LL
Nov 8 00:17:27.858571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:17:27.926755 systemd[1]: Reloading finished in 278 ms.
Nov 8 00:17:27.945726 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 8 00:17:27.948574 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
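(Aside, not part of the journal: the loop-device capacity changes and (sd-merge) messages above are systemd-sysext attaching the containerd-flatcar, docker-flatcar, and kubernetes extension images and overlaying them onto /usr. A small Python sketch listing images in the standard sysext search directories documented in systemd-sysext(8); the output is entirely host-dependent.)

    from pathlib import Path

    # Standard systemd-sysext search paths.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        root = Path(d)
        if root.is_dir():
            for image in sorted(root.iterdir()):
                print(f"{d}: {image.name}")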
Nov 8 00:17:27.960596 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:17:27.984147 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:17:27.987910 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:17:27.993137 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:17:27.998180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:17:28.003791 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:17:28.011840 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:17:28.012162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:17:28.022666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:17:28.027210 augenrules[1488]: No rules
Nov 8 00:17:28.027585 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:17:28.034143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:17:28.036043 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:17:28.036195 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:17:28.037694 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:17:28.040511 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:17:28.043398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:17:28.043688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:17:28.046429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:17:28.046653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:17:28.049258 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:17:28.049532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:17:28.070481 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:17:28.074259 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:17:28.074575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:17:28.076370 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:17:28.080369 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:17:28.085950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:17:28.090047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:17:28.091937 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:17:28.096972 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:17:28.097168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:17:28.108478 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:17:28.126816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:17:28.127078 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:17:28.129619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:17:28.129862 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:17:28.132625 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:17:28.132883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:17:28.136574 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:17:28.146689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:17:28.146908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:17:28.151354 systemd-resolved[1477]: Positive Trust Anchors:
Nov 8 00:17:28.151369 systemd-resolved[1477]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:17:28.151403 systemd-resolved[1477]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:17:28.152001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:17:28.154965 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:17:28.155248 systemd-resolved[1477]: Defaulting to hostname 'linux'.
Nov 8 00:17:28.158989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:17:28.161015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:17:28.163149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:17:28.163213 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:17:28.163237 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:17:28.163526 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:17:28.166361 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:17:28.168313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:17:28.168643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:17:28.171402 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:17:28.171689 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:17:28.174196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:17:28.174433 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:17:28.176885 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:17:28.177126 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:17:28.185132 systemd[1]: Reached target network.target - Network.
Nov 8 00:17:28.186659 systemd[1]: Reached target network-online.target - Network is Online.
Nov 8 00:17:28.188462 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:17:28.190509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:17:28.190588 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:17:28.203085 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 00:17:28.266903 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 8 00:17:28.269150 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:17:28.269682 systemd-timesyncd[1540]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 8 00:17:28.269744 systemd-timesyncd[1540]: Initial clock synchronization to Sat 2025-11-08 00:17:28.425754 UTC.
Nov 8 00:17:28.271270 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:17:28.273313 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:17:28.275318 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:17:28.277397 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:17:28.277438 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:17:28.278948 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:17:28.280797 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:17:28.282614 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:17:28.284618 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:17:28.287001 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:17:28.290861 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:17:28.295601 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:17:28.298223 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:17:28.300030 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:17:28.301615 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:17:28.303344 systemd[1]: System is tainted: cgroupsv1
Nov 8 00:17:28.303387 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
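(Aside, not part of the journal: comparing the journal timestamp of the timesyncd message above, 00:17:28.269744, with the wall-clock time it reports, 00:17:28.425754 UTC, gives the size of the initial clock step. A quick check of that arithmetic.)

    # Seconds-within-the-minute, taken verbatim from the two times above.
    logged, synced = 28.269744, 28.425754
    print(f"{(synced - logged) * 1000:.1f} ms clock step")  # ~156.0 ms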
Nov 8 00:17:28.303412 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:17:28.304726 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:17:28.307645 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 8 00:17:28.310557 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:17:28.315565 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:17:28.318573 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:17:28.320228 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:17:28.323105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:17:28.326238 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:17:28.331224 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 8 00:17:28.333280 jq[1548]: false
Nov 8 00:17:28.338784 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 00:17:28.343875 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 00:17:28.349918 dbus-daemon[1546]: [system] SELinux support is enabled
Nov 8 00:17:28.350002 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found loop3
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found loop4
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found loop5
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found sr0
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found vda
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found vda1
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found vda2
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found vda3
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found usr
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found vda4
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found vda6
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found vda7
Nov 8 00:17:28.351238 extend-filesystems[1550]: Found vda9
Nov 8 00:17:28.351238 extend-filesystems[1550]: Checking size of /dev/vda9
Nov 8 00:17:28.378827 extend-filesystems[1550]: Resized partition /dev/vda9
Nov 8 00:17:28.363640 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:17:28.369610 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 00:17:28.380507 extend-filesystems[1579]: resize2fs 1.47.1 (20-May-2024)
Nov 8 00:17:28.388956 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 8 00:17:28.384064 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 00:17:28.389036 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 00:17:28.398338 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 00:17:28.404894 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1259)
Nov 8 00:17:28.419261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 00:17:28.419618 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 00:17:28.420527 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:17:28.420859 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:17:28.421663 jq[1580]: true Nov 8 00:17:28.426474 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:17:28.433388 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:17:28.433686 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:17:28.444800 (ntainerd)[1594]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:17:28.450689 jq[1593]: true Nov 8 00:17:28.456156 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:17:28.457443 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:17:28.457836 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 8 00:17:28.489731 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:17:28.513590 update_engine[1577]: I20251108 00:17:28.513475 1577 main.cc:92] Flatcar Update Engine starting Nov 8 00:17:28.516873 update_engine[1577]: I20251108 00:17:28.514950 1577 update_check_scheduler.cc:74] Next update check in 3m3s Nov 8 00:17:28.515042 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:17:28.516630 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:17:28.516731 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:17:28.516753 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:17:28.518758 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:17:28.518775 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:17:28.522061 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:17:28.524393 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:17:28.525337 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:17:28.531839 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:17:28.532288 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:17:28.535837 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:17:28.613427 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:17:28.633001 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 8 00:17:28.634508 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:17:28.659693 tar[1592]: linux-amd64/LICENSE Nov 8 00:17:28.638387 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:17:28.640329 systemd[1]: Reached target getty.target - Login Prompts. 
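Annotation: ssh-key-proc-cmdline, finished above, lifts a key out of the kernel command line. A small illustrative parser for /proc/cmdline follows; the 'sshkey' parameter name is an assumption for demonstration, and shlex keeps quoted values (such as a full public key) intact.

    # Illustrative /proc/cmdline parser of the kind ssh-key-proc-cmdline relies on.
    import shlex

    def parse_cmdline(path="/proc/cmdline"):
        with open(path) as f:
            tokens = shlex.split(f.read().strip())
        params = {}
        for tok in tokens:
            key, _, value = tok.partition("=")
            params[key] = value  # bare flags map to an empty string
        return params

    if __name__ == "__main__":
        params = parse_cmdline()
        print(params.get("sshkey", "<no sshkey parameter present>"))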
Nov 8 00:17:28.644045 locksmithd[1636]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:17:28.660433 tar[1592]: linux-amd64/helm Nov 8 00:17:28.661053 systemd-logind[1568]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:17:28.661083 systemd-logind[1568]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:17:28.662297 systemd-logind[1568]: New seat seat0. Nov 8 00:17:28.663816 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:17:28.663816 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 8 00:17:28.663816 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 8 00:17:28.690197 extend-filesystems[1550]: Resized filesystem in /dev/vda9 Nov 8 00:17:28.693219 bash[1635]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:17:28.668064 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:17:28.668428 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:17:28.676978 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:17:28.683303 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:17:28.695334 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:17:28.997725 containerd[1594]: time="2025-11-08T00:17:28.997526218Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:17:29.028555 containerd[1594]: time="2025-11-08T00:17:29.028491805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:17:29.034992 containerd[1594]: time="2025-11-08T00:17:29.034917938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:17:29.034992 containerd[1594]: time="2025-11-08T00:17:29.034975904Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:17:29.034992 containerd[1594]: time="2025-11-08T00:17:29.034994086Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:17:29.035283 containerd[1594]: time="2025-11-08T00:17:29.035259926Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:17:29.035334 containerd[1594]: time="2025-11-08T00:17:29.035298913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:17:29.035445 containerd[1594]: time="2025-11-08T00:17:29.035403417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:17:29.035445 containerd[1594]: time="2025-11-08T00:17:29.035442620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:17:29.035764 containerd[1594]: time="2025-11-08T00:17:29.035731584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:17:29.035764 containerd[1594]: time="2025-11-08T00:17:29.035752707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:17:29.035813 containerd[1594]: time="2025-11-08T00:17:29.035766742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:17:29.035813 containerd[1594]: time="2025-11-08T00:17:29.035779581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:17:29.035953 containerd[1594]: time="2025-11-08T00:17:29.035922215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:17:29.036224 containerd[1594]: time="2025-11-08T00:17:29.036203150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:17:29.036403 containerd[1594]: time="2025-11-08T00:17:29.036382311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:17:29.036403 containerd[1594]: time="2025-11-08T00:17:29.036400064Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:17:29.036532 containerd[1594]: time="2025-11-08T00:17:29.036512697Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:17:29.036603 containerd[1594]: time="2025-11-08T00:17:29.036582573Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:17:29.153275 tar[1592]: linux-amd64/README.md Nov 8 00:17:29.168756 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:17:29.175677 containerd[1594]: time="2025-11-08T00:17:29.175598445Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:17:29.175780 containerd[1594]: time="2025-11-08T00:17:29.175714746Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:17:29.175780 containerd[1594]: time="2025-11-08T00:17:29.175746799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:17:29.175841 containerd[1594]: time="2025-11-08T00:17:29.175769444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:17:29.175841 containerd[1594]: time="2025-11-08T00:17:29.175815675Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:17:29.176098 containerd[1594]: time="2025-11-08T00:17:29.176060318Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:17:29.176677 containerd[1594]: time="2025-11-08T00:17:29.176617880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:17:29.176928 containerd[1594]: time="2025-11-08T00:17:29.176898469Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Nov 8 00:17:29.176973 containerd[1594]: time="2025-11-08T00:17:29.176930644Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:17:29.176973 containerd[1594]: time="2025-11-08T00:17:29.176951256Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:17:29.177057 containerd[1594]: time="2025-11-08T00:17:29.176972278Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:17:29.177057 containerd[1594]: time="2025-11-08T00:17:29.177002338Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:17:29.177057 containerd[1594]: time="2025-11-08T00:17:29.177033052Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:17:29.177140 containerd[1594]: time="2025-11-08T00:17:29.177057609Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:17:29.177140 containerd[1594]: time="2025-11-08T00:17:29.177080396Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:17:29.177140 containerd[1594]: time="2025-11-08T00:17:29.177100621Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:17:29.177140 containerd[1594]: time="2025-11-08T00:17:29.177117414Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:17:29.177140 containerd[1594]: time="2025-11-08T00:17:29.177133338Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:17:29.177280 containerd[1594]: time="2025-11-08T00:17:29.177162224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177280 containerd[1594]: time="2025-11-08T00:17:29.177180895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177280 containerd[1594]: time="2025-11-08T00:17:29.177196892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177280 containerd[1594]: time="2025-11-08T00:17:29.177212826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177280 containerd[1594]: time="2025-11-08T00:17:29.177228249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177280 containerd[1594]: time="2025-11-08T00:17:29.177253438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177280 containerd[1594]: time="2025-11-08T00:17:29.177272334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177490 containerd[1594]: time="2025-11-08T00:17:29.177292018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177490 containerd[1594]: time="2025-11-08T00:17:29.177309750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Nov 8 00:17:29.177490 containerd[1594]: time="2025-11-08T00:17:29.177329025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177490 containerd[1594]: time="2025-11-08T00:17:29.177345398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177490 containerd[1594]: time="2025-11-08T00:17:29.177373916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177490 containerd[1594]: time="2025-11-08T00:17:29.177400791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177490 containerd[1594]: time="2025-11-08T00:17:29.177422680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:17:29.177490 containerd[1594]: time="2025-11-08T00:17:29.177449227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177490 containerd[1594]: time="2025-11-08T00:17:29.177465192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177490 containerd[1594]: time="2025-11-08T00:17:29.177481168Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:17:29.177783 containerd[1594]: time="2025-11-08T00:17:29.177548306Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:17:29.177783 containerd[1594]: time="2025-11-08T00:17:29.177575794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:17:29.177783 containerd[1594]: time="2025-11-08T00:17:29.177591350Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:17:29.177783 containerd[1594]: time="2025-11-08T00:17:29.177608918Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:17:29.177783 containerd[1594]: time="2025-11-08T00:17:29.177623679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:17:29.177783 containerd[1594]: time="2025-11-08T00:17:29.177639827Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:17:29.177783 containerd[1594]: time="2025-11-08T00:17:29.177672227Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:17:29.177783 containerd[1594]: time="2025-11-08T00:17:29.177688100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:17:29.178144 containerd[1594]: time="2025-11-08T00:17:29.178044204Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:17:29.178327 containerd[1594]: time="2025-11-08T00:17:29.178156725Z" level=info msg="Connect containerd service" Nov 8 00:17:29.178327 containerd[1594]: time="2025-11-08T00:17:29.178211587Z" level=info msg="using legacy CRI server" Nov 8 00:17:29.178327 containerd[1594]: time="2025-11-08T00:17:29.178223967Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:17:29.178576 containerd[1594]: time="2025-11-08T00:17:29.178523584Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:17:29.180055 containerd[1594]: time="2025-11-08T00:17:29.180010796Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:17:29.180395 
containerd[1594]: time="2025-11-08T00:17:29.180342058Z" level=info msg="Start subscribing containerd event" Nov 8 00:17:29.180395 containerd[1594]: time="2025-11-08T00:17:29.180400943Z" level=info msg="Start recovering state" Nov 8 00:17:29.180540 containerd[1594]: time="2025-11-08T00:17:29.180407675Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:17:29.180540 containerd[1594]: time="2025-11-08T00:17:29.180464926Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:17:29.180540 containerd[1594]: time="2025-11-08T00:17:29.180485875Z" level=info msg="Start event monitor" Nov 8 00:17:29.180540 containerd[1594]: time="2025-11-08T00:17:29.180529093Z" level=info msg="Start snapshots syncer" Nov 8 00:17:29.180665 containerd[1594]: time="2025-11-08T00:17:29.180541105Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:17:29.180665 containerd[1594]: time="2025-11-08T00:17:29.180550962Z" level=info msg="Start streaming server" Nov 8 00:17:29.180665 containerd[1594]: time="2025-11-08T00:17:29.180658468Z" level=info msg="containerd successfully booted in 0.185153s" Nov 8 00:17:29.180925 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:17:29.933512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:17:29.935835 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:17:29.937696 systemd[1]: Startup finished in 6.989s (kernel) + 5.415s (userspace) = 12.405s. Nov 8 00:17:29.938916 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:17:30.485437 kubelet[1680]: E1108 00:17:30.485371 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:17:30.489626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:17:30.490058 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:17:32.321165 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:17:32.332133 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:42980.service - OpenSSH per-connection server daemon (10.0.0.1:42980). Nov 8 00:17:32.391605 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 42980 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:17:32.394116 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:32.403548 systemd-logind[1568]: New session 1 of user core. Nov 8 00:17:32.404721 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:17:32.414059 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:17:32.426547 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:17:32.429371 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:17:32.442112 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:17:32.561368 systemd[1698]: Queued start job for default target default.target. Nov 8 00:17:32.561742 systemd[1698]: Created slice app.slice - User Application Slice. 
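Annotation: containerd's only startup error above, "no network config found in /etc/cni/net.d", is expected on a node where no CNI plugin has been installed yet; the kubelet will report the node NotReady until a config appears. A sketch of the same preflight (directory path from the log), scanning for .conf/.conflist files the way the CNI loader does:

    import json, pathlib

    def find_cni_configs(conf_dir="/etc/cni/net.d"):
        d = pathlib.Path(conf_dir)
        if not d.is_dir():
            return []
        found = []
        for p in sorted(d.glob("*.conf*")):  # matches .conf and .conflist
            try:
                doc = json.loads(p.read_text())
            except (OSError, json.JSONDecodeError):
                continue  # skip unreadable or malformed files
            found.append((p.name, doc.get("name", "?")))
        return found

    if __name__ == "__main__":
        configs = find_cni_configs()
        if not configs:
            print("cni config load failed: no network config found in /etc/cni/net.d")
        else:
            for fname, net in configs:
                print(f"{fname}: network {net!r}")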
Nov 8 00:17:32.561765 systemd[1698]: Reached target paths.target - Paths. Nov 8 00:17:32.561778 systemd[1698]: Reached target timers.target - Timers. Nov 8 00:17:32.577940 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:17:32.584726 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:17:32.584792 systemd[1698]: Reached target sockets.target - Sockets. Nov 8 00:17:32.584807 systemd[1698]: Reached target basic.target - Basic System. Nov 8 00:17:32.584845 systemd[1698]: Reached target default.target - Main User Target. Nov 8 00:17:32.584892 systemd[1698]: Startup finished in 135ms. Nov 8 00:17:32.585923 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:17:32.588148 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:17:32.651231 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:42992.service - OpenSSH per-connection server daemon (10.0.0.1:42992). Nov 8 00:17:32.689723 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 42992 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:17:32.691511 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:32.695840 systemd-logind[1568]: New session 2 of user core. Nov 8 00:17:32.706107 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:17:32.759785 sshd[1711]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:32.768089 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:43006.service - OpenSSH per-connection server daemon (10.0.0.1:43006). Nov 8 00:17:32.768558 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:42992.service: Deactivated successfully. Nov 8 00:17:32.771217 systemd-logind[1568]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:17:32.772109 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:17:32.773539 systemd-logind[1568]: Removed session 2. Nov 8 00:17:32.799397 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 43006 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:17:32.801010 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:32.805009 systemd-logind[1568]: New session 3 of user core. Nov 8 00:17:32.815123 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:17:32.866651 sshd[1716]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:32.875088 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:43020.service - OpenSSH per-connection server daemon (10.0.0.1:43020). Nov 8 00:17:32.875561 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:43006.service: Deactivated successfully. Nov 8 00:17:32.877979 systemd-logind[1568]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:17:32.878675 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:17:32.880230 systemd-logind[1568]: Removed session 3. Nov 8 00:17:32.906493 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 43020 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:17:32.908153 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:32.911995 systemd-logind[1568]: New session 4 of user core. Nov 8 00:17:32.922148 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 8 00:17:32.976907 sshd[1724]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:32.985085 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:35356.service - OpenSSH per-connection server daemon (10.0.0.1:35356). Nov 8 00:17:32.985547 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:43020.service: Deactivated successfully. Nov 8 00:17:32.987910 systemd-logind[1568]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:17:32.988612 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:17:32.989746 systemd-logind[1568]: Removed session 4. Nov 8 00:17:33.016352 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 35356 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:17:33.018050 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:33.021959 systemd-logind[1568]: New session 5 of user core. Nov 8 00:17:33.032119 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:17:33.095053 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:17:33.095429 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:17:33.112515 sudo[1739]: pam_unix(sudo:session): session closed for user root Nov 8 00:17:33.115053 sshd[1732]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:33.130171 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:35370.service - OpenSSH per-connection server daemon (10.0.0.1:35370). Nov 8 00:17:33.130734 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:35356.service: Deactivated successfully. Nov 8 00:17:33.133618 systemd-logind[1568]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:17:33.134512 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:17:33.135743 systemd-logind[1568]: Removed session 5. Nov 8 00:17:33.161525 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 35370 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:17:33.163279 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:33.167450 systemd-logind[1568]: New session 6 of user core. Nov 8 00:17:33.177177 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:17:33.233230 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:17:33.233588 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:17:33.237772 sudo[1749]: pam_unix(sudo:session): session closed for user root Nov 8 00:17:33.244732 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:17:33.245106 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:17:33.269120 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:17:33.271033 auditctl[1752]: No rules Nov 8 00:17:33.272495 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:17:33.272885 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:17:33.275195 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:17:33.309305 augenrules[1771]: No rules Nov 8 00:17:33.311530 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
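Annotation: the sudo session above deletes the default audit rule files and restarts audit-rules.service, after which auditctl reports "No rules". A thin wrapper over the same auditctl -l query (root required, as in the logged session):

    import subprocess

    def list_audit_rules():
        # auditctl -l prints one rule per line, or the literal 'No rules'.
        proc = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
        if proc.returncode != 0:
            raise RuntimeError(proc.stderr.strip() or "auditctl failed (root required)")
        return [line for line in proc.stdout.splitlines() if line and line != "No rules"]

    if __name__ == "__main__":
        print(f"{len(list_audit_rules())} audit rule(s) loaded")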
Nov 8 00:17:33.313107 sudo[1748]: pam_unix(sudo:session): session closed for user root Nov 8 00:17:33.315170 sshd[1741]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:33.328242 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:35376.service - OpenSSH per-connection server daemon (10.0.0.1:35376). Nov 8 00:17:33.328781 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:35370.service: Deactivated successfully. Nov 8 00:17:33.331562 systemd-logind[1568]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:17:33.332448 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:17:33.334072 systemd-logind[1568]: Removed session 6. Nov 8 00:17:33.362323 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 35376 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:17:33.364185 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:33.368207 systemd-logind[1568]: New session 7 of user core. Nov 8 00:17:33.378108 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:17:33.432412 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:17:33.432766 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:17:34.027087 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:17:34.027352 (dockerd)[1802]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:17:34.527078 dockerd[1802]: time="2025-11-08T00:17:34.526923700Z" level=info msg="Starting up" Nov 8 00:17:35.329667 dockerd[1802]: time="2025-11-08T00:17:35.329607321Z" level=info msg="Loading containers: start." Nov 8 00:17:35.461888 kernel: Initializing XFRM netlink socket Nov 8 00:17:35.548126 systemd-networkd[1256]: docker0: Link UP Nov 8 00:17:35.572314 dockerd[1802]: time="2025-11-08T00:17:35.572245017Z" level=info msg="Loading containers: done." Nov 8 00:17:35.633396 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3431564077-merged.mount: Deactivated successfully. Nov 8 00:17:35.634459 dockerd[1802]: time="2025-11-08T00:17:35.634398625Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:17:35.634618 dockerd[1802]: time="2025-11-08T00:17:35.634585779Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:17:35.634779 dockerd[1802]: time="2025-11-08T00:17:35.634752122Z" level=info msg="Daemon has completed initialization" Nov 8 00:17:35.676746 dockerd[1802]: time="2025-11-08T00:17:35.676643574Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:17:35.676939 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:17:36.487198 containerd[1594]: time="2025-11-08T00:17:36.487147383Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:17:37.171618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205398117.mount: Deactivated successfully. 
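Annotation: once dockerd logs "API listen on /run/docker.sock" above, the daemon answers plain HTTP over that Unix socket. A standard-library sketch that asks it for its version (no docker SDK assumed; socket path from the log):

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials an AF_UNIX socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    if __name__ == "__main__":
        conn = UnixHTTPConnection("/run/docker.sock")
        conn.request("GET", "/version")
        info = json.loads(conn.getresponse().read())
        print(info["Version"], info["ApiVersion"])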
Nov 8 00:17:38.208173 containerd[1594]: time="2025-11-08T00:17:38.208095384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:38.208939 containerd[1594]: time="2025-11-08T00:17:38.208857573Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:17:38.209903 containerd[1594]: time="2025-11-08T00:17:38.209871408Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:38.213139 containerd[1594]: time="2025-11-08T00:17:38.213098393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:38.214328 containerd[1594]: time="2025-11-08T00:17:38.214279949Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.727087396s" Nov 8 00:17:38.214387 containerd[1594]: time="2025-11-08T00:17:38.214334913Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:17:38.215166 containerd[1594]: time="2025-11-08T00:17:38.215143548Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:17:39.638218 containerd[1594]: time="2025-11-08T00:17:39.638129238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:39.639307 containerd[1594]: time="2025-11-08T00:17:39.638903237Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 00:17:39.640381 containerd[1594]: time="2025-11-08T00:17:39.640332728Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:39.643067 containerd[1594]: time="2025-11-08T00:17:39.643032971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:39.644139 containerd[1594]: time="2025-11-08T00:17:39.644096519Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.428922554s" Nov 8 00:17:39.644181 containerd[1594]: time="2025-11-08T00:17:39.644138210Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:17:39.644926 containerd[1594]: 
time="2025-11-08T00:17:39.644896881Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:17:40.740183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:17:40.751027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:17:40.944738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:17:40.956214 (kubelet)[2029]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:17:41.274214 kubelet[2029]: E1108 00:17:41.274084 2029 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:17:41.280777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:17:41.281122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:17:41.291504 containerd[1594]: time="2025-11-08T00:17:41.291457706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:41.292171 containerd[1594]: time="2025-11-08T00:17:41.292134098Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 00:17:41.293572 containerd[1594]: time="2025-11-08T00:17:41.293530535Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:41.296591 containerd[1594]: time="2025-11-08T00:17:41.296554658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:41.297766 containerd[1594]: time="2025-11-08T00:17:41.297740296Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.652810565s" Nov 8 00:17:41.297830 containerd[1594]: time="2025-11-08T00:17:41.297769414Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:17:41.298489 containerd[1594]: time="2025-11-08T00:17:41.298336857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:17:43.341694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3537931287.mount: Deactivated successfully. 
Nov 8 00:17:44.138378 containerd[1594]: time="2025-11-08T00:17:44.138299866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:44.139183 containerd[1594]: time="2025-11-08T00:17:44.139141983Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:17:44.140744 containerd[1594]: time="2025-11-08T00:17:44.140650477Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:44.142873 containerd[1594]: time="2025-11-08T00:17:44.142801872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:44.143563 containerd[1594]: time="2025-11-08T00:17:44.143521368Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.845142557s" Nov 8 00:17:44.143563 containerd[1594]: time="2025-11-08T00:17:44.143559449Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:17:44.144199 containerd[1594]: time="2025-11-08T00:17:44.144164530Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:17:44.948828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827321919.mount: Deactivated successfully. 
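Annotation: each pull record above carries the image size and wall time ("size \"30923225\" in 2.845142557s" for kube-proxy), which is enough for a back-of-envelope registry throughput estimate:

    def pull_throughput(size_bytes: int, seconds: float) -> float:
        """Effective pull rate in MiB/s from containerd's logged size and duration."""
        return size_bytes / seconds / 2**20

    if __name__ == "__main__":
        # Figures copied from the kube-proxy:v1.32.9 record above: ~10.4 MiB/s.
        print(f"{pull_throughput(30923225, 2.845142557):.1f} MiB/s")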
Nov 8 00:17:45.911778 containerd[1594]: time="2025-11-08T00:17:45.911707264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:45.912778 containerd[1594]: time="2025-11-08T00:17:45.912730331Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 8 00:17:45.914159 containerd[1594]: time="2025-11-08T00:17:45.914097703Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:45.917427 containerd[1594]: time="2025-11-08T00:17:45.917389842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:45.918429 containerd[1594]: time="2025-11-08T00:17:45.918399010Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.774207974s" Nov 8 00:17:45.918482 containerd[1594]: time="2025-11-08T00:17:45.918430201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:17:45.919273 containerd[1594]: time="2025-11-08T00:17:45.919249477Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:17:46.539087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1112483904.mount: Deactivated successfully. 
Nov 8 00:17:46.545907 containerd[1594]: time="2025-11-08T00:17:46.545866448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:46.546607 containerd[1594]: time="2025-11-08T00:17:46.546564258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:17:46.547664 containerd[1594]: time="2025-11-08T00:17:46.547628713Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:46.550079 containerd[1594]: time="2025-11-08T00:17:46.550028483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:46.550764 containerd[1594]: time="2025-11-08T00:17:46.550719497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 631.442477ms" Nov 8 00:17:46.550764 containerd[1594]: time="2025-11-08T00:17:46.550754864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:17:46.551386 containerd[1594]: time="2025-11-08T00:17:46.551239938Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:17:47.206528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567702093.mount: Deactivated successfully. Nov 8 00:17:49.477747 containerd[1594]: time="2025-11-08T00:17:49.477655336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:49.478516 containerd[1594]: time="2025-11-08T00:17:49.478476692Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 8 00:17:49.479994 containerd[1594]: time="2025-11-08T00:17:49.479942103Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:49.483393 containerd[1594]: time="2025-11-08T00:17:49.483306763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:49.484718 containerd[1594]: time="2025-11-08T00:17:49.484671450Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.933406649s" Nov 8 00:17:49.484718 containerd[1594]: time="2025-11-08T00:17:49.484710586Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:17:51.531316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 8 00:17:51.545073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:17:51.642072 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:17:51.642247 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:17:51.642714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:17:51.654107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:17:51.680394 systemd[1]: Reloading requested from client PID 2193 ('systemctl') (unit session-7.scope)... Nov 8 00:17:51.680413 systemd[1]: Reloading... Nov 8 00:17:51.755897 zram_generator::config[2233]: No configuration found. Nov 8 00:17:52.352367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:17:52.428202 systemd[1]: Reloading finished in 747 ms. Nov 8 00:17:52.478163 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:17:52.478279 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:17:52.478671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:17:52.480822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:17:52.657046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:17:52.658463 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:17:52.705494 kubelet[2292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:17:52.705494 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:17:52.705494 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:17:52.706000 kubelet[2292]: I1108 00:17:52.705571 2292 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:17:53.242328 kubelet[2292]: I1108 00:17:53.242260 2292 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:17:53.242328 kubelet[2292]: I1108 00:17:53.242309 2292 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:17:53.242630 kubelet[2292]: I1108 00:17:53.242604 2292 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:17:53.269713 kubelet[2292]: E1108 00:17:53.269667 2292 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:17:53.272601 kubelet[2292]: I1108 00:17:53.272560 2292 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:17:53.281429 kubelet[2292]: E1108 00:17:53.281367 2292 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:17:53.281511 kubelet[2292]: I1108 00:17:53.281433 2292 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:17:53.287107 kubelet[2292]: I1108 00:17:53.287061 2292 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:17:53.287727 kubelet[2292]: I1108 00:17:53.287680 2292 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:17:53.287932 kubelet[2292]: I1108 00:17:53.287716 2292 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:17:53.288052 kubelet[2292]: I1108 00:17:53.287942 2292 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:17:53.288052 kubelet[2292]: I1108 00:17:53.287953 2292 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:17:53.288172 kubelet[2292]: I1108 00:17:53.288143 2292 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:17:53.290695 kubelet[2292]: I1108 00:17:53.290666 2292 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:17:53.290721 kubelet[2292]: I1108 00:17:53.290710 2292 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:17:53.290753 kubelet[2292]: I1108 00:17:53.290735 2292 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:17:53.290753 kubelet[2292]: I1108 00:17:53.290749 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:17:53.293318 kubelet[2292]: I1108 00:17:53.293276 2292 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:17:53.293690 kubelet[2292]: I1108 00:17:53.293672 2292 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:17:53.293763 kubelet[2292]: W1108 00:17:53.293748 2292 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
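Annotation: the nodeConfig dump above lists the kubelet's hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, and the imagefs pair). The sketch below evaluates just the two nodefs signals against the kubelet root dir with statvfs; the real logic lives in the kubelet's eviction manager, so this is only an approximation.

    import os

    def nodefs_signals(path="/var/lib/kubelet"):
        st = os.statvfs(path)
        avail = st.f_bavail / st.f_blocks if st.f_blocks else 1.0  # free space fraction
        inodes = st.f_favail / st.f_files if st.f_files else 1.0   # free inode fraction
        return {
            "nodefs.available<10%": avail < 0.10,
            "nodefs.inodesFree<5%": inodes < 0.05,
        }

    if __name__ == "__main__":
        for signal, under_pressure in nodefs_signals().items():
            print(f"{signal}: {'EVICT' if under_pressure else 'ok'}")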
Nov 8 00:17:53.294266 kubelet[2292]: W1108 00:17:53.294210 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Nov 8 00:17:53.294308 kubelet[2292]: E1108 00:17:53.294293 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:17:53.294876 kubelet[2292]: W1108 00:17:53.294605 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Nov 8 00:17:53.294876 kubelet[2292]: E1108 00:17:53.294640 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:17:53.297989 kubelet[2292]: I1108 00:17:53.297957 2292 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:17:53.298051 kubelet[2292]: I1108 00:17:53.298003 2292 server.go:1287] "Started kubelet"
Nov 8 00:17:53.298197 kubelet[2292]: I1108 00:17:53.298126 2292 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:17:53.298343 kubelet[2292]: I1108 00:17:53.298299 2292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:17:53.298678 kubelet[2292]: I1108 00:17:53.298653 2292 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:17:53.299305 kubelet[2292]: I1108 00:17:53.299270 2292 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 00:17:53.300935 kubelet[2292]: I1108 00:17:53.300912 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:17:53.301688 kubelet[2292]: I1108 00:17:53.301657 2292 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:17:53.303710 kubelet[2292]: I1108 00:17:53.303685 2292 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:17:53.303774 kubelet[2292]: I1108 00:17:53.303765 2292 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:17:53.304883 kubelet[2292]: I1108 00:17:53.303828 2292 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:17:53.304883 kubelet[2292]: W1108 00:17:53.304078 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Nov 8 00:17:53.304883 kubelet[2292]: E1108 00:17:53.304108 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:17:53.304883 kubelet[2292]: E1108 00:17:53.304249 2292 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:17:53.304883 kubelet[2292]: E1108 00:17:53.304570 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms"
Nov 8 00:17:53.304883 kubelet[2292]: I1108 00:17:53.304723 2292 factory.go:221] Registration of the systemd container factory successfully
Nov 8 00:17:53.304883 kubelet[2292]: I1108 00:17:53.304794 2292 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:17:53.305564 kubelet[2292]: E1108 00:17:53.305524 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:17:53.306003 kubelet[2292]: I1108 00:17:53.305981 2292 factory.go:221] Registration of the containerd container factory successfully
Nov 8 00:17:53.317095 kubelet[2292]: E1108 00:17:53.315575 2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875dff52cdb6f68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:17:53.297977192 +0000 UTC m=+0.631873764,LastTimestamp:2025-11-08 00:17:53.297977192 +0000 UTC m=+0.631873764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 8 00:17:53.329797 kubelet[2292]: I1108 00:17:53.329764 2292 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:17:53.329930 kubelet[2292]: I1108 00:17:53.329805 2292 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:17:53.329930 kubelet[2292]: I1108 00:17:53.329833 2292 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:17:53.330016 kubelet[2292]: I1108 00:17:53.329964 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:17:53.332053 kubelet[2292]: I1108 00:17:53.332024 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:17:53.332093 kubelet[2292]: I1108 00:17:53.332069 2292 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 00:17:53.332131 kubelet[2292]: I1108 00:17:53.332093 2292 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:17:53.332131 kubelet[2292]: I1108 00:17:53.332102 2292 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 8 00:17:53.332168 kubelet[2292]: E1108 00:17:53.332155 2292 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:17:53.333327 kubelet[2292]: W1108 00:17:53.332649 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Nov 8 00:17:53.333327 kubelet[2292]: E1108 00:17:53.332724 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:17:53.406246 kubelet[2292]: E1108 00:17:53.406213 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:17:53.432454 kubelet[2292]: E1108 00:17:53.432396 2292 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 8 00:17:53.505195 kubelet[2292]: E1108 00:17:53.505033 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms"
Nov 8 00:17:53.507142 kubelet[2292]: E1108 00:17:53.507095 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:17:53.607843 kubelet[2292]: E1108 00:17:53.607775 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:17:53.633190 kubelet[2292]: E1108 00:17:53.633108 2292 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 8 00:17:53.708867 kubelet[2292]: E1108 00:17:53.708791 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:17:53.810055 kubelet[2292]: E1108 00:17:53.809879 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:17:53.905984 kubelet[2292]: E1108 00:17:53.905908 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms"
Nov 8 00:17:53.911050 kubelet[2292]: E1108 00:17:53.910988 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:17:53.917296 kubelet[2292]: I1108 00:17:53.917247 2292 policy_none.go:49] "None policy: Start"
Nov 8 00:17:53.917296 kubelet[2292]: I1108 00:17:53.917301 2292 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 00:17:53.917374 kubelet[2292]: I1108 00:17:53.917327 2292 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 00:17:53.971959 kubelet[2292]: I1108 00:17:53.971909 2292 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 8 00:17:53.972224 kubelet[2292]: I1108 00:17:53.972201 2292 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:17:53.972289 kubelet[2292]: I1108 00:17:53.972224 2292 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:17:53.973445 kubelet[2292]: I1108 00:17:53.973148 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:17:53.973978 kubelet[2292]: E1108 00:17:53.973957 2292 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:17:53.974037 kubelet[2292]: E1108 00:17:53.974003 2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 8 00:17:54.040281 kubelet[2292]: E1108 00:17:54.040211 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:17:54.041649 kubelet[2292]: E1108 00:17:54.041620 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:17:54.043260 kubelet[2292]: E1108 00:17:54.043229 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:17:54.074125 kubelet[2292]: I1108 00:17:54.073983 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:17:54.074527 kubelet[2292]: E1108 00:17:54.074476 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Nov 8 00:17:54.107929 kubelet[2292]: I1108 00:17:54.107885 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:17:54.107929 kubelet[2292]: I1108 00:17:54.107914 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:17:54.107929 kubelet[2292]: I1108 00:17:54.107934 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:17:54.108072 kubelet[2292]: I1108 00:17:54.107949 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea90a9c9408e07a55fa353d543320466-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ea90a9c9408e07a55fa353d543320466\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:17:54.108072 kubelet[2292]: I1108 00:17:54.107964 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:17:54.108072 kubelet[2292]: I1108 00:17:54.107981 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:17:54.108072 kubelet[2292]: I1108 00:17:54.107998 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Nov 8 00:17:54.108072 kubelet[2292]: I1108 00:17:54.108032 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea90a9c9408e07a55fa353d543320466-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea90a9c9408e07a55fa353d543320466\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:17:54.108238 kubelet[2292]: I1108 00:17:54.108051 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea90a9c9408e07a55fa353d543320466-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea90a9c9408e07a55fa353d543320466\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:17:54.108430 kubelet[2292]: W1108 00:17:54.108384 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Nov 8 00:17:54.108482 kubelet[2292]: E1108 00:17:54.108432 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:17:54.277203 kubelet[2292]: I1108 00:17:54.277150 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:17:54.277708 kubelet[2292]: E1108 00:17:54.277652 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Nov 8 00:17:54.341278 kubelet[2292]: E1108 00:17:54.341124 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:54.341938 containerd[1594]: time="2025-11-08T00:17:54.341879664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}"
Nov 8 00:17:54.342671 kubelet[2292]: E1108 00:17:54.342209 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:54.343052 containerd[1594]: time="2025-11-08T00:17:54.343002596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}"
Nov 8 00:17:54.344061 kubelet[2292]: E1108 00:17:54.344008 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:54.344466 containerd[1594]: time="2025-11-08T00:17:54.344406468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ea90a9c9408e07a55fa353d543320466,Namespace:kube-system,Attempt:0,}"
Nov 8 00:17:54.580440 kubelet[2292]: W1108 00:17:54.580348 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Nov 8 00:17:54.580440 kubelet[2292]: E1108 00:17:54.580418 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:17:54.584225 kubelet[2292]: W1108 00:17:54.584161 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Nov 8 00:17:54.584388 kubelet[2292]: E1108 00:17:54.584230 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:17:54.609582 kubelet[2292]: W1108 00:17:54.609399 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Nov 8 00:17:54.609582 kubelet[2292]: E1108 00:17:54.609492 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:17:54.679116 kubelet[2292]: I1108 00:17:54.679064 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:17:54.679604 kubelet[2292]: E1108 00:17:54.679560 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
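The recurring dns.go:153 warnings mean the host's resolv.conf lists more nameservers than the resolver limit of three, so the kubelet applies only the first three: 1.1.1.1 1.0.0.1 8.8.8.8. A short sketch of that trimming, assuming a simple take-the-first-three rule (an assumption for illustration, not the kubelet's actual dns.go):

package main

import "fmt"

// maxNameservers reflects the three-nameserver cap implied by the log;
// the trimming below is an illustrative sketch, not kubelet code.
const maxNameservers = 3

func applyNameserverLimit(ns []string) []string {
	if len(ns) <= maxNameservers {
		return ns
	}
	return ns[:maxNameservers] // extra servers are omitted, as the log warns
}

func main() {
	// Hypothetical resolv.conf contents with four nameservers.
	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	fmt.Println(applyNameserverLimit(ns)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}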
Nov 8 00:17:54.707624 kubelet[2292]: E1108 00:17:54.707561 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="1.6s"
Nov 8 00:17:55.368765 kubelet[2292]: E1108 00:17:55.368704 2292 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:17:55.481999 kubelet[2292]: I1108 00:17:55.481945 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:17:55.482371 kubelet[2292]: E1108 00:17:55.482318 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Nov 8 00:17:56.194664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3823984181.mount: Deactivated successfully.
Nov 8 00:17:56.201774 containerd[1594]: time="2025-11-08T00:17:56.201710349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:17:56.203954 containerd[1594]: time="2025-11-08T00:17:56.203892836Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 8 00:17:56.205163 containerd[1594]: time="2025-11-08T00:17:56.205126331Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:17:56.205965 containerd[1594]: time="2025-11-08T00:17:56.205938304Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:17:56.206903 containerd[1594]: time="2025-11-08T00:17:56.206871999Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:17:56.207911 containerd[1594]: time="2025-11-08T00:17:56.207872384Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 8 00:17:56.209082 containerd[1594]: time="2025-11-08T00:17:56.209051509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 8 00:17:56.211547 containerd[1594]: time="2025-11-08T00:17:56.211502262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:17:56.213309 containerd[1594]: time="2025-11-08T00:17:56.213274874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.871274231s"
Nov 8 00:17:56.213917 containerd[1594]: time="2025-11-08T00:17:56.213885211Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.869397054s"
Nov 8 00:17:56.214516 containerd[1594]: time="2025-11-08T00:17:56.214482979Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.871373729s"
Nov 8 00:17:56.308995 kubelet[2292]: E1108 00:17:56.308935 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="3.2s"
Nov 8 00:17:56.780906 containerd[1594]: time="2025-11-08T00:17:56.780772024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:17:56.782322 containerd[1594]: time="2025-11-08T00:17:56.780921282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:17:56.782322 containerd[1594]: time="2025-11-08T00:17:56.780956457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:17:56.782322 containerd[1594]: time="2025-11-08T00:17:56.781124521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:17:56.782432 containerd[1594]: time="2025-11-08T00:17:56.782300609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:17:56.782432 containerd[1594]: time="2025-11-08T00:17:56.782349065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:17:56.782432 containerd[1594]: time="2025-11-08T00:17:56.782360663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:17:56.784052 containerd[1594]: time="2025-11-08T00:17:56.783978232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:17:56.789984 containerd[1594]: time="2025-11-08T00:17:56.789593243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:17:56.789984 containerd[1594]: time="2025-11-08T00:17:56.789734020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:17:56.789984 containerd[1594]: time="2025-11-08T00:17:56.789802766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:17:56.789984 containerd[1594]: time="2025-11-08T00:17:56.789984092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:17:56.885967 containerd[1594]: time="2025-11-08T00:17:56.885918475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"121c6e0d4e9edae147dc447c42d63e7f2d21c443f8d626ef982362e4cf9e0c3c\""
Nov 8 00:17:56.886810 kubelet[2292]: E1108 00:17:56.886784 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:56.893433 containerd[1594]: time="2025-11-08T00:17:56.893394559Z" level=info msg="CreateContainer within sandbox \"121c6e0d4e9edae147dc447c42d63e7f2d21c443f8d626ef982362e4cf9e0c3c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 8 00:17:56.899548 containerd[1594]: time="2025-11-08T00:17:56.899511095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"f008462d239c2a472596840c77e4d997eaed58549c6043d106eaf98aebc49f98\""
Nov 8 00:17:56.900527 kubelet[2292]: E1108 00:17:56.900260 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:56.902619 containerd[1594]: time="2025-11-08T00:17:56.902582259Z" level=info msg="CreateContainer within sandbox \"f008462d239c2a472596840c77e4d997eaed58549c6043d106eaf98aebc49f98\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 8 00:17:56.905583 containerd[1594]: time="2025-11-08T00:17:56.905544280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ea90a9c9408e07a55fa353d543320466,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecf6830b3927f03d592f8bd947521aeaefebf11682025975aa76ad00740c8087\""
Nov 8 00:17:56.906419 kubelet[2292]: E1108 00:17:56.906389 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:56.908312 containerd[1594]: time="2025-11-08T00:17:56.908281261Z" level=info msg="CreateContainer within sandbox \"ecf6830b3927f03d592f8bd947521aeaefebf11682025975aa76ad00740c8087\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 8 00:17:57.003882 containerd[1594]: time="2025-11-08T00:17:57.003793023Z" level=info msg="CreateContainer within sandbox \"121c6e0d4e9edae147dc447c42d63e7f2d21c443f8d626ef982362e4cf9e0c3c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c9075519c8d32acf62964a187c32561a84d1ee00460c036dfaecc0cf0c9c5b5\""
Nov 8 00:17:57.004647 containerd[1594]: time="2025-11-08T00:17:57.004583662Z" level=info msg="StartContainer for \"9c9075519c8d32acf62964a187c32561a84d1ee00460c036dfaecc0cf0c9c5b5\""
Nov 8 00:17:57.012628 containerd[1594]: time="2025-11-08T00:17:57.012589130Z" level=info msg="CreateContainer within sandbox \"ecf6830b3927f03d592f8bd947521aeaefebf11682025975aa76ad00740c8087\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"221a226a0076a88ae92e8055ff95e3fcbd93dadd30a236941349e89a47467b90\""
Nov 8 00:17:57.013256 containerd[1594]: time="2025-11-08T00:17:57.013232905Z" level=info msg="StartContainer for \"221a226a0076a88ae92e8055ff95e3fcbd93dadd30a236941349e89a47467b90\""
Nov 8 00:17:57.013576 containerd[1594]: time="2025-11-08T00:17:57.013534520Z" level=info msg="CreateContainer within sandbox \"f008462d239c2a472596840c77e4d997eaed58549c6043d106eaf98aebc49f98\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"04e7f569d98268101af8a5bacb956bf6d0da8046ea2f923d666bd7ec660992ce\""
Nov 8 00:17:57.014314 containerd[1594]: time="2025-11-08T00:17:57.014187447Z" level=info msg="StartContainer for \"04e7f569d98268101af8a5bacb956bf6d0da8046ea2f923d666bd7ec660992ce\""
Nov 8 00:17:57.053520 kubelet[2292]: W1108 00:17:57.053392 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused
Nov 8 00:17:57.053520 kubelet[2292]: E1108 00:17:57.053442 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:17:57.083783 kubelet[2292]: I1108 00:17:57.083696 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:17:57.084275 kubelet[2292]: E1108 00:17:57.084183 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Nov 8 00:17:57.099882 containerd[1594]: time="2025-11-08T00:17:57.098353920Z" level=info msg="StartContainer for \"9c9075519c8d32acf62964a187c32561a84d1ee00460c036dfaecc0cf0c9c5b5\" returns successfully"
Nov 8 00:17:57.107553 containerd[1594]: time="2025-11-08T00:17:57.107500877Z" level=info msg="StartContainer for \"04e7f569d98268101af8a5bacb956bf6d0da8046ea2f923d666bd7ec660992ce\" returns successfully"
Nov 8 00:17:57.107692 containerd[1594]: time="2025-11-08T00:17:57.107571081Z" level=info msg="StartContainer for \"221a226a0076a88ae92e8055ff95e3fcbd93dadd30a236941349e89a47467b90\" returns successfully"
Nov 8 00:17:57.343273 kubelet[2292]: E1108 00:17:57.342779 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:17:57.343273 kubelet[2292]: E1108 00:17:57.342926 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:57.346913 kubelet[2292]: E1108 00:17:57.346891 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:17:57.347354 kubelet[2292]: E1108 00:17:57.347141 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:57.349300 kubelet[2292]: E1108 00:17:57.349231 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:17:57.349890 kubelet[2292]: E1108 00:17:57.349449 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:58.353959 kubelet[2292]: E1108 00:17:58.352812 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:17:58.353959 kubelet[2292]: E1108 00:17:58.353007 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:58.355282 kubelet[2292]: E1108 00:17:58.355251 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:17:58.355421 kubelet[2292]: E1108 00:17:58.355396 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:58.816586 kubelet[2292]: E1108 00:17:58.816445 2292 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Nov 8 00:17:58.914380 kubelet[2292]: E1108 00:17:58.914348 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:17:58.914502 kubelet[2292]: E1108 00:17:58.914487 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:59.166938 kubelet[2292]: E1108 00:17:59.166732 2292 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Nov 8 00:17:59.513716 kubelet[2292]: E1108 00:17:59.513561 2292 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 8 00:17:59.598831 kubelet[2292]: E1108 00:17:59.598777 2292 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Nov 8 00:17:59.988687 kubelet[2292]: E1108 00:17:59.988553 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:17:59.988787 kubelet[2292]: E1108 00:17:59.988750 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:00.285741 kubelet[2292]: I1108 00:18:00.285601 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:18:00.606770 kubelet[2292]: I1108 00:18:00.606607 2292 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 8 00:18:00.606770 kubelet[2292]: E1108 00:18:00.606668 2292 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Nov 8 00:18:00.616658 kubelet[2292]: E1108 00:18:00.616611 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:18:00.717105 kubelet[2292]: E1108 00:18:00.717042 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:18:00.718795 systemd[1]: Reloading requested from client PID 2573 ('systemctl') (unit session-7.scope)...
Nov 8 00:18:00.718811 systemd[1]: Reloading...
Nov 8 00:18:00.798598 kubelet[2292]: E1108 00:18:00.798414 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 8 00:18:00.798598 kubelet[2292]: E1108 00:18:00.798538 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:00.805878 zram_generator::config[2612]: No configuration found.
Nov 8 00:18:00.818121 kubelet[2292]: E1108 00:18:00.818086 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:18:00.918833 kubelet[2292]: E1108 00:18:00.918685 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:18:00.942899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:18:01.019691 kubelet[2292]: E1108 00:18:01.019634 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:18:01.027096 systemd[1]: Reloading finished in 307 ms.
Nov 8 00:18:01.065016 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:18:01.090311 systemd[1]: kubelet.service: Deactivated successfully.
Nov 8 00:18:01.090712 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:18:01.097061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:18:01.265074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:18:01.271347 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:18:01.307416 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:18:01.307416 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:18:01.307416 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
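Before the restart, the "Failed to ensure lease exists, will retry" retries back off geometrically: interval="200ms", then 400ms, 800ms, 1.6s, and 3.2s. A short Go sketch of that doubling backoff follows; the 7s ceiling is an assumption for illustration, not something shown in this log:

package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the lease-renewal retry interval on each failure,
// matching the 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s progression seen
// above. The maxInterval cap is an assumed value for this sketch.
func nextInterval(cur time.Duration) time.Duration {
	const maxInterval = 7 * time.Second
	next := cur * 2
	if next > maxInterval {
		return maxInterval
	}
	return next
}

func main() {
	interval := 200 * time.Millisecond
	for i := 0; i < 5; i++ {
		fmt.Println(interval) // 200ms 400ms 800ms 1.6s 3.2s
		interval = nextInterval(interval)
	}
}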
Nov 8 00:18:01.307867 kubelet[2667]: I1108 00:18:01.307483 2667 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:18:01.314739 kubelet[2667]: I1108 00:18:01.314695 2667 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 8 00:18:01.314739 kubelet[2667]: I1108 00:18:01.314721 2667 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:18:01.315013 kubelet[2667]: I1108 00:18:01.314996 2667 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 8 00:18:01.316156 kubelet[2667]: I1108 00:18:01.316125 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 8 00:18:01.318224 kubelet[2667]: I1108 00:18:01.318203 2667 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:18:01.320642 kubelet[2667]: E1108 00:18:01.320618 2667 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:18:01.320642 kubelet[2667]: I1108 00:18:01.320645 2667 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:18:01.326505 kubelet[2667]: I1108 00:18:01.326461 2667 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 00:18:01.327144 kubelet[2667]: I1108 00:18:01.327095 2667 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:18:01.327301 kubelet[2667]: I1108 00:18:01.327124 2667 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Nov 8 00:18:01.327422 kubelet[2667]: I1108 00:18:01.327303 2667 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:18:01.327422 kubelet[2667]: I1108 00:18:01.327317 2667 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 00:18:01.327422 kubelet[2667]: I1108 00:18:01.327380 2667 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:18:01.327596 kubelet[2667]: I1108 00:18:01.327566 2667 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 00:18:01.327596 kubelet[2667]: I1108 00:18:01.327594 2667 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:18:01.327659 kubelet[2667]: I1108 00:18:01.327616 2667 kubelet.go:352] "Adding apiserver pod source"
Nov 8 00:18:01.327659 kubelet[2667]: I1108 00:18:01.327629 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:18:01.330995 kubelet[2667]: I1108 00:18:01.328919 2667 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:18:01.330995 kubelet[2667]: I1108 00:18:01.329475 2667 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 00:18:01.330995 kubelet[2667]: I1108 00:18:01.330023 2667 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:18:01.330995 kubelet[2667]: I1108 00:18:01.330050 2667 server.go:1287] "Started kubelet"
Nov 8 00:18:01.331953 kubelet[2667]: I1108 00:18:01.331902 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:18:01.332219 kubelet[2667]: I1108 00:18:01.332199 2667 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:18:01.332292 kubelet[2667]: I1108 00:18:01.332250 2667 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:18:01.334597 kubelet[2667]: I1108 00:18:01.334578 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:18:01.336600 kubelet[2667]: I1108 00:18:01.334804 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:18:01.336689 kubelet[2667]: I1108 00:18:01.336669 2667 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:18:01.336758 kubelet[2667]: I1108 00:18:01.336738 2667 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:18:01.337292 kubelet[2667]: E1108 00:18:01.337259 2667 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 8 00:18:01.337690 kubelet[2667]: I1108 00:18:01.337673 2667 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 00:18:01.338902 kubelet[2667]: I1108 00:18:01.338883 2667 factory.go:221] Registration of the systemd container factory successfully
Nov 8 00:18:01.338985 kubelet[2667]: I1108 00:18:01.338956 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:18:01.340334 kubelet[2667]: I1108 00:18:01.340256 2667 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:18:01.340618 kubelet[2667]: E1108 00:18:01.340598 2667 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:18:01.343929 kubelet[2667]: I1108 00:18:01.343883 2667 factory.go:221] Registration of the containerd container factory successfully
Nov 8 00:18:01.350720 kubelet[2667]: I1108 00:18:01.350558 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:18:01.351818 kubelet[2667]: I1108 00:18:01.351800 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:18:01.351948 kubelet[2667]: I1108 00:18:01.351932 2667 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 00:18:01.352033 kubelet[2667]: I1108 00:18:01.352018 2667 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:18:01.352096 kubelet[2667]: I1108 00:18:01.352085 2667 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 8 00:18:01.352401 kubelet[2667]: E1108 00:18:01.352204 2667 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:18:01.392567 kubelet[2667]: I1108 00:18:01.392536 2667 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:18:01.392567 kubelet[2667]: I1108 00:18:01.392554 2667 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:18:01.392567 kubelet[2667]: I1108 00:18:01.392577 2667 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:18:01.392788 kubelet[2667]: I1108 00:18:01.392748 2667 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 8 00:18:01.392788 kubelet[2667]: I1108 00:18:01.392764 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 8 00:18:01.392788 kubelet[2667]: I1108 00:18:01.392786 2667 policy_none.go:49] "None policy: Start"
Nov 8 00:18:01.392912 kubelet[2667]: I1108 00:18:01.392799 2667 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 00:18:01.392912 kubelet[2667]: I1108 00:18:01.392813 2667 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 00:18:01.393145 kubelet[2667]: I1108 00:18:01.392966 2667 state_mem.go:75] "Updated machine memory state"
Nov 8 00:18:01.395878 kubelet[2667]: I1108 00:18:01.394523 2667 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 8 00:18:01.395878 kubelet[2667]: I1108 00:18:01.394735 2667 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:18:01.395878 kubelet[2667]: I1108 00:18:01.394750 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:18:01.395878 kubelet[2667]: I1108 00:18:01.395122 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:18:01.395878 kubelet[2667]: E1108 00:18:01.395478 2667 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:18:01.453141 kubelet[2667]: I1108 00:18:01.453105 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 8 00:18:01.453141 kubelet[2667]: I1108 00:18:01.453126 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:18:01.453318 kubelet[2667]: I1108 00:18:01.453157 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 8 00:18:01.501930 kubelet[2667]: I1108 00:18:01.500840 2667 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:18:01.508092 kubelet[2667]: I1108 00:18:01.508061 2667 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 8 00:18:01.508204 kubelet[2667]: I1108 00:18:01.508153 2667 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 8 00:18:01.541673 kubelet[2667]: I1108 00:18:01.540984 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:18:01.541673 kubelet[2667]: I1108 00:18:01.541027 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea90a9c9408e07a55fa353d543320466-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea90a9c9408e07a55fa353d543320466\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:18:01.541673 kubelet[2667]: I1108 00:18:01.541052 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:18:01.541673 kubelet[2667]: I1108 00:18:01.541072 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:18:01.541673 kubelet[2667]: I1108 00:18:01.541094 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:18:01.541972 kubelet[2667]: I1108 00:18:01.541116 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea90a9c9408e07a55fa353d543320466-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea90a9c9408e07a55fa353d543320466\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:18:01.541972 kubelet[2667]: I1108 00:18:01.541138 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea90a9c9408e07a55fa353d543320466-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ea90a9c9408e07a55fa353d543320466\") " pod="kube-system/kube-apiserver-localhost"
Nov 8 00:18:01.541972 kubelet[2667]: I1108 00:18:01.541160 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:18:01.541972 kubelet[2667]: I1108 00:18:01.541180 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Nov 8 00:18:01.783388 kubelet[2667]: E1108 00:18:01.783334 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:01.783605 kubelet[2667]: E1108 00:18:01.783334 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:01.783605 kubelet[2667]: E1108 00:18:01.783349 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:02.328971 kubelet[2667]: I1108 00:18:02.328890 2667 apiserver.go:52] "Watching apiserver"
Nov 8 00:18:02.337627 kubelet[2667]: I1108 00:18:02.337584 2667 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 8 00:18:02.366983 kubelet[2667]: I1108 00:18:02.366661 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:18:02.366983 kubelet[2667]: E1108 00:18:02.366685 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:02.366983 kubelet[2667]: I1108 00:18:02.366688 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 8 00:18:02.414916 kubelet[2667]: E1108 00:18:02.414858 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 8 00:18:02.414916 kubelet[2667]: E1108 00:18:02.414862 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 8 00:18:02.415172 kubelet[2667]: E1108 00:18:02.415100 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:02.415172 kubelet[2667]: E1108 00:18:02.415163 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:02.507749 kubelet[2667]: I1108 00:18:02.507662 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.507642162 podStartE2EDuration="1.507642162s" podCreationTimestamp="2025-11-08 00:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:18:02.507409551 +0000 UTC m=+1.231801833" watchObservedRunningTime="2025-11-08 00:18:02.507642162 +0000 UTC m=+1.232034444"
Nov 8 00:18:02.523206 kubelet[2667]: I1108 00:18:02.523107 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.523059945 podStartE2EDuration="1.523059945s" podCreationTimestamp="2025-11-08 00:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:18:02.516800257 +0000 UTC m=+1.241192539" watchObservedRunningTime="2025-11-08 00:18:02.523059945 +0000 UTC m=+1.247452227"
Nov 8 00:18:02.523419 kubelet[2667]: I1108 00:18:02.523386 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.523379926 podStartE2EDuration="1.523379926s" podCreationTimestamp="2025-11-08 00:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:18:02.523245576 +0000 UTC m=+1.247637858" watchObservedRunningTime="2025-11-08 00:18:02.523379926 +0000 UTC m=+1.247772208"
Nov 8 00:18:03.367473 kubelet[2667]: E1108 00:18:03.367433 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:03.367473 kubelet[2667]: E1108 00:18:03.367458 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:03.368044 kubelet[2667]: E1108 00:18:03.367565 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:04.369201 kubelet[2667]: E1108 00:18:04.369157 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:05.670286 kubelet[2667]: E1108 00:18:05.670239 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:06.234076 kubelet[2667]: I1108 00:18:06.234035 2667 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 8 00:18:06.234629 containerd[1594]: time="2025-11-08T00:18:06.234569682Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
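The podStartSLOduration values above are plain timestamp arithmetic: watchObservedRunningTime minus podCreationTimestamp, e.g. 00:18:02.507642162 minus 00:18:01 gives the 1.507642162s reported for kube-controller-manager-localhost. A small sketch reproducing that subtraction (illustrative; error handling elided):

package main

import (
	"fmt"
	"time"
)

// The SLO duration logged by pod_startup_latency_tracker is simply
// watchObservedRunningTime - podCreationTimestamp, reproduced here for
// the kube-controller-manager-localhost entry above.
func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-11-08 00:18:01 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-11-08 00:18:02.507642162 +0000 UTC")
	fmt.Println(observed.Sub(created).Seconds()) // 1.507642162
}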
Nov 8 00:18:06.235123 kubelet[2667]: I1108 00:18:06.234946 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 8 00:18:06.974872 kubelet[2667]: I1108 00:18:06.974810 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa8ad855-d646-45e9-b3c6-57814e4b285a-kube-proxy\") pod \"kube-proxy-z2gs2\" (UID: \"fa8ad855-d646-45e9-b3c6-57814e4b285a\") " pod="kube-system/kube-proxy-z2gs2"
Nov 8 00:18:06.974872 kubelet[2667]: I1108 00:18:06.974873 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa8ad855-d646-45e9-b3c6-57814e4b285a-xtables-lock\") pod \"kube-proxy-z2gs2\" (UID: \"fa8ad855-d646-45e9-b3c6-57814e4b285a\") " pod="kube-system/kube-proxy-z2gs2"
Nov 8 00:18:06.975368 kubelet[2667]: I1108 00:18:06.974912 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wfr8\" (UniqueName: \"kubernetes.io/projected/fa8ad855-d646-45e9-b3c6-57814e4b285a-kube-api-access-2wfr8\") pod \"kube-proxy-z2gs2\" (UID: \"fa8ad855-d646-45e9-b3c6-57814e4b285a\") " pod="kube-system/kube-proxy-z2gs2"
Nov 8 00:18:06.975368 kubelet[2667]: I1108 00:18:06.974992 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa8ad855-d646-45e9-b3c6-57814e4b285a-lib-modules\") pod \"kube-proxy-z2gs2\" (UID: \"fa8ad855-d646-45e9-b3c6-57814e4b285a\") " pod="kube-system/kube-proxy-z2gs2"
Nov 8 00:18:07.231185 kubelet[2667]: E1108 00:18:07.231023 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:07.231978 containerd[1594]: time="2025-11-08T00:18:07.231913613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2gs2,Uid:fa8ad855-d646-45e9-b3c6-57814e4b285a,Namespace:kube-system,Attempt:0,}"
Nov 8 00:18:07.260125 containerd[1594]: time="2025-11-08T00:18:07.257724869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:18:07.260125 containerd[1594]: time="2025-11-08T00:18:07.258817230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:18:07.260125 containerd[1594]: time="2025-11-08T00:18:07.258837814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:18:07.260125 containerd[1594]: time="2025-11-08T00:18:07.259091764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:18:07.262594 kubelet[2667]: E1108 00:18:07.260990 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:07.310018 containerd[1594]: time="2025-11-08T00:18:07.309944916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2gs2,Uid:fa8ad855-d646-45e9-b3c6-57814e4b285a,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa8cfc8e4606520713f209e9a33fdb14929a2f297096267399cac735dd2c2de0\""
Nov 8 00:18:07.312033 kubelet[2667]: E1108 00:18:07.310963 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:07.314883 containerd[1594]: time="2025-11-08T00:18:07.313422211Z" level=info msg="CreateContainer within sandbox \"aa8cfc8e4606520713f209e9a33fdb14929a2f297096267399cac735dd2c2de0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 8 00:18:07.344517 containerd[1594]: time="2025-11-08T00:18:07.344372001Z" level=info msg="CreateContainer within sandbox \"aa8cfc8e4606520713f209e9a33fdb14929a2f297096267399cac735dd2c2de0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b653682d88b47edf974e22b785a21397477e8b6c8236a05a7df1047d0aa13116\""
Nov 8 00:18:07.345312 containerd[1594]: time="2025-11-08T00:18:07.345283626Z" level=info msg="StartContainer for \"b653682d88b47edf974e22b785a21397477e8b6c8236a05a7df1047d0aa13116\""
Nov 8 00:18:07.382914 kubelet[2667]: E1108 00:18:07.377479 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:07.429946 containerd[1594]: time="2025-11-08T00:18:07.429902502Z" level=info msg="StartContainer for \"b653682d88b47edf974e22b785a21397477e8b6c8236a05a7df1047d0aa13116\" returns successfully"
Nov 8 00:18:07.477479 kubelet[2667]: I1108 00:18:07.477434 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/229a9917-01bb-408d-b66e-381d7c9fc8ca-var-lib-calico\") pod \"tigera-operator-7dcd859c48-7gxxg\" (UID: \"229a9917-01bb-408d-b66e-381d7c9fc8ca\") " pod="tigera-operator/tigera-operator-7dcd859c48-7gxxg"
Nov 8 00:18:07.477479 kubelet[2667]: I1108 00:18:07.477475 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5747\" (UniqueName: \"kubernetes.io/projected/229a9917-01bb-408d-b66e-381d7c9fc8ca-kube-api-access-n5747\") pod \"tigera-operator-7dcd859c48-7gxxg\" (UID: \"229a9917-01bb-408d-b66e-381d7c9fc8ca\") " pod="tigera-operator/tigera-operator-7dcd859c48-7gxxg"
Nov 8 00:18:07.691361 containerd[1594]: time="2025-11-08T00:18:07.691231922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7gxxg,Uid:229a9917-01bb-408d-b66e-381d7c9fc8ca,Namespace:tigera-operator,Attempt:0,}"
Nov 8 00:18:07.716783 containerd[1594]: time="2025-11-08T00:18:07.716669807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:18:07.716783 containerd[1594]: time="2025-11-08T00:18:07.716727136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:18:07.716783 containerd[1594]: time="2025-11-08T00:18:07.716757851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:18:07.716989 containerd[1594]: time="2025-11-08T00:18:07.716894656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:18:07.782746 containerd[1594]: time="2025-11-08T00:18:07.782693754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7gxxg,Uid:229a9917-01bb-408d-b66e-381d7c9fc8ca,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b62960294cbd2f45792cb86df10a2b76a43cf94553afa125ea4d30cdd723cbcc\""
Nov 8 00:18:07.784748 containerd[1594]: time="2025-11-08T00:18:07.784546364Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 8 00:18:08.380822 kubelet[2667]: E1108 00:18:08.380446 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:08.390965 kubelet[2667]: I1108 00:18:08.390816 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z2gs2" podStartSLOduration=2.390786439 podStartE2EDuration="2.390786439s" podCreationTimestamp="2025-11-08 00:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:18:08.390530257 +0000 UTC m=+7.114922539" watchObservedRunningTime="2025-11-08 00:18:08.390786439 +0000 UTC m=+7.115178721"
Nov 8 00:18:09.116920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088416216.mount: Deactivated successfully.
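[Note on the pod_startup_latency_tracker entries above: firstStartedPulling and lastFinishedPulling are reported as "0001-01-01 00:00:00 +0000 UTC", which is Go's zero time.Time value; the kubelet leaves these fields unset when no image pull was needed because the images were already present on the node, as for the static control-plane pods and kube-proxy here. Contrast the tigera-operator entry further down, where real pull timestamps appear. A minimal Go sketch, standard library only and nothing kubelet-specific, showing why the zero value prints exactly that string:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var noPull time.Time // zero value: no pull was ever recorded
        fmt.Println(noPull)  // prints: 0001-01-01 00:00:00 +0000 UTC
    }
]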
Nov 8 00:18:09.383107 kubelet[2667]: E1108 00:18:09.382963 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:09.451079 containerd[1594]: time="2025-11-08T00:18:09.451009012Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:18:09.451759 containerd[1594]: time="2025-11-08T00:18:09.451692966Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:18:09.452829 containerd[1594]: time="2025-11-08T00:18:09.452786928Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:18:09.455213 containerd[1594]: time="2025-11-08T00:18:09.455176898Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:18:09.455759 containerd[1594]: time="2025-11-08T00:18:09.455723399Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.671148193s" Nov 8 00:18:09.455759 containerd[1594]: time="2025-11-08T00:18:09.455752268Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:18:09.457663 containerd[1594]: time="2025-11-08T00:18:09.457607985Z" level=info msg="CreateContainer within sandbox \"b62960294cbd2f45792cb86df10a2b76a43cf94553afa125ea4d30cdd723cbcc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:18:09.472771 containerd[1594]: time="2025-11-08T00:18:09.472730387Z" level=info msg="CreateContainer within sandbox \"b62960294cbd2f45792cb86df10a2b76a43cf94553afa125ea4d30cdd723cbcc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"39030e5065b629b95278a110a16583ae752a687f1db83d4adb292c377bbbfbbc\"" Nov 8 00:18:09.473230 containerd[1594]: time="2025-11-08T00:18:09.473200128Z" level=info msg="StartContainer for \"39030e5065b629b95278a110a16583ae752a687f1db83d4adb292c377bbbfbbc\"" Nov 8 00:18:09.532072 containerd[1594]: time="2025-11-08T00:18:09.532027752Z" level=info msg="StartContainer for \"39030e5065b629b95278a110a16583ae752a687f1db83d4adb292c377bbbfbbc\" returns successfully" Nov 8 00:18:10.395204 kubelet[2667]: I1108 00:18:10.395067 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-7gxxg" podStartSLOduration=1.7225467829999999 podStartE2EDuration="3.395042636s" podCreationTimestamp="2025-11-08 00:18:07 +0000 UTC" firstStartedPulling="2025-11-08 00:18:07.783957504 +0000 UTC m=+6.508349786" lastFinishedPulling="2025-11-08 00:18:09.456453357 +0000 UTC m=+8.180845639" observedRunningTime="2025-11-08 00:18:10.394331503 +0000 UTC m=+9.118723805" watchObservedRunningTime="2025-11-08 00:18:10.395042636 +0000 UTC m=+9.119434918" Nov 8 00:18:12.882161 kubelet[2667]: E1108 00:18:12.882122 2667 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:13.570058 update_engine[1577]: I20251108 00:18:13.569974 1577 update_attempter.cc:509] Updating boot flags... Nov 8 00:18:13.599923 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3048) Nov 8 00:18:13.637742 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3050) Nov 8 00:18:13.655247 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3050) Nov 8 00:18:15.281228 sudo[1784]: pam_unix(sudo:session): session closed for user root Nov 8 00:18:15.289038 sshd[1777]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:15.293717 systemd-logind[1568]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:18:15.297142 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:35376.service: Deactivated successfully. Nov 8 00:18:15.303707 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:18:15.307776 systemd-logind[1568]: Removed session 7. Nov 8 00:18:15.676030 kubelet[2667]: E1108 00:18:15.675650 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:16.399537 kubelet[2667]: E1108 00:18:16.399488 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:19.453508 kubelet[2667]: I1108 00:18:19.453434 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3517d5b5-8846-4c66-9be9-406d432577bd-tigera-ca-bundle\") pod \"calico-typha-7f869bb954-qmrz6\" (UID: \"3517d5b5-8846-4c66-9be9-406d432577bd\") " pod="calico-system/calico-typha-7f869bb954-qmrz6" Nov 8 00:18:19.453508 kubelet[2667]: I1108 00:18:19.453495 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3517d5b5-8846-4c66-9be9-406d432577bd-typha-certs\") pod \"calico-typha-7f869bb954-qmrz6\" (UID: \"3517d5b5-8846-4c66-9be9-406d432577bd\") " pod="calico-system/calico-typha-7f869bb954-qmrz6" Nov 8 00:18:19.453508 kubelet[2667]: I1108 00:18:19.453519 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wctzq\" (UniqueName: \"kubernetes.io/projected/3517d5b5-8846-4c66-9be9-406d432577bd-kube-api-access-wctzq\") pod \"calico-typha-7f869bb954-qmrz6\" (UID: \"3517d5b5-8846-4c66-9be9-406d432577bd\") " pod="calico-system/calico-typha-7f869bb954-qmrz6" Nov 8 00:18:19.654210 kubelet[2667]: I1108 00:18:19.654142 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bbf8d7e7-5540-4856-bf3d-ce66f535159b-var-lib-calico\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.654210 kubelet[2667]: I1108 00:18:19.654192 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bbf8d7e7-5540-4856-bf3d-ce66f535159b-flexvol-driver-host\") pod \"calico-node-fw7ts\" 
(UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.654210 kubelet[2667]: I1108 00:18:19.654210 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbf8d7e7-5540-4856-bf3d-ce66f535159b-lib-modules\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.654431 kubelet[2667]: I1108 00:18:19.654276 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bbf8d7e7-5540-4856-bf3d-ce66f535159b-cni-log-dir\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.654431 kubelet[2667]: I1108 00:18:19.654322 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bbf8d7e7-5540-4856-bf3d-ce66f535159b-var-run-calico\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.654431 kubelet[2667]: I1108 00:18:19.654338 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbf8d7e7-5540-4856-bf3d-ce66f535159b-xtables-lock\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.654431 kubelet[2667]: I1108 00:18:19.654355 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbf8d7e7-5540-4856-bf3d-ce66f535159b-tigera-ca-bundle\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.654431 kubelet[2667]: I1108 00:18:19.654371 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bbf8d7e7-5540-4856-bf3d-ce66f535159b-cni-bin-dir\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.654577 kubelet[2667]: I1108 00:18:19.654392 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr2s4\" (UniqueName: \"kubernetes.io/projected/bbf8d7e7-5540-4856-bf3d-ce66f535159b-kube-api-access-rr2s4\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.654577 kubelet[2667]: I1108 00:18:19.654414 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bbf8d7e7-5540-4856-bf3d-ce66f535159b-cni-net-dir\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.654577 kubelet[2667]: I1108 00:18:19.654452 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bbf8d7e7-5540-4856-bf3d-ce66f535159b-node-certs\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 
00:18:19.654577 kubelet[2667]: I1108 00:18:19.654471 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bbf8d7e7-5540-4856-bf3d-ce66f535159b-policysync\") pod \"calico-node-fw7ts\" (UID: \"bbf8d7e7-5540-4856-bf3d-ce66f535159b\") " pod="calico-system/calico-node-fw7ts" Nov 8 00:18:19.677417 kubelet[2667]: E1108 00:18:19.677382 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:19.678035 containerd[1594]: time="2025-11-08T00:18:19.677969557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f869bb954-qmrz6,Uid:3517d5b5-8846-4c66-9be9-406d432577bd,Namespace:calico-system,Attempt:0,}" Nov 8 00:18:19.706072 containerd[1594]: time="2025-11-08T00:18:19.705873332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:19.706072 containerd[1594]: time="2025-11-08T00:18:19.705928793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:19.706072 containerd[1594]: time="2025-11-08T00:18:19.705938913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:19.706239 containerd[1594]: time="2025-11-08T00:18:19.706034704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:19.749002 kubelet[2667]: E1108 00:18:19.748942 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13" Nov 8 00:18:19.763900 kubelet[2667]: E1108 00:18:19.761320 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.763900 kubelet[2667]: W1108 00:18:19.761345 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.763900 kubelet[2667]: E1108 00:18:19.761548 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.763900 kubelet[2667]: E1108 00:18:19.761721 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.763900 kubelet[2667]: W1108 00:18:19.761757 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.763900 kubelet[2667]: E1108 00:18:19.761780 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.763900 kubelet[2667]: E1108 00:18:19.762123 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.763900 kubelet[2667]: W1108 00:18:19.762134 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.763900 kubelet[2667]: E1108 00:18:19.762334 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.763900 kubelet[2667]: E1108 00:18:19.762465 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.764366 kubelet[2667]: W1108 00:18:19.762483 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.764366 kubelet[2667]: E1108 00:18:19.762613 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.764366 kubelet[2667]: E1108 00:18:19.763829 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.764366 kubelet[2667]: W1108 00:18:19.763841 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.764366 kubelet[2667]: E1108 00:18:19.763944 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.764366 kubelet[2667]: E1108 00:18:19.764227 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.764366 kubelet[2667]: W1108 00:18:19.764239 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.764366 kubelet[2667]: E1108 00:18:19.764304 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.764649 kubelet[2667]: E1108 00:18:19.764638 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.765612 kubelet[2667]: W1108 00:18:19.764804 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.765612 kubelet[2667]: E1108 00:18:19.764940 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.765612 kubelet[2667]: E1108 00:18:19.765056 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.765612 kubelet[2667]: W1108 00:18:19.765064 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.765612 kubelet[2667]: E1108 00:18:19.765151 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.766553 kubelet[2667]: E1108 00:18:19.765862 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.766553 kubelet[2667]: W1108 00:18:19.765891 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.767783 kubelet[2667]: E1108 00:18:19.767063 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.769160 kubelet[2667]: E1108 00:18:19.769127 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.769160 kubelet[2667]: W1108 00:18:19.769143 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.769253 kubelet[2667]: E1108 00:18:19.769172 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.771047 kubelet[2667]: E1108 00:18:19.771021 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.771047 kubelet[2667]: W1108 00:18:19.771042 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.771153 kubelet[2667]: E1108 00:18:19.771068 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.772286 kubelet[2667]: E1108 00:18:19.772251 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.772337 kubelet[2667]: W1108 00:18:19.772290 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.772741 kubelet[2667]: E1108 00:18:19.772723 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.772996 kubelet[2667]: W1108 00:18:19.772888 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.772996 kubelet[2667]: E1108 00:18:19.772925 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.772996 kubelet[2667]: E1108 00:18:19.772974 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.778402 kubelet[2667]: E1108 00:18:19.777815 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.778402 kubelet[2667]: W1108 00:18:19.777838 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.778402 kubelet[2667]: E1108 00:18:19.777879 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.787142 containerd[1594]: time="2025-11-08T00:18:19.787091064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f869bb954-qmrz6,Uid:3517d5b5-8846-4c66-9be9-406d432577bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c143bf062881ffbe63f7048e16f327513455e54790cd49e1e421de6cde9a25b\"" Nov 8 00:18:19.787983 kubelet[2667]: E1108 00:18:19.787932 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:19.791951 containerd[1594]: time="2025-11-08T00:18:19.791901515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:18:19.841924 kubelet[2667]: E1108 00:18:19.841867 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.841924 kubelet[2667]: W1108 00:18:19.841903 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.841924 kubelet[2667]: E1108 00:18:19.841938 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.842966 kubelet[2667]: E1108 00:18:19.842931 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.842966 kubelet[2667]: W1108 00:18:19.842954 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.842966 kubelet[2667]: E1108 00:18:19.842969 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.843441 kubelet[2667]: E1108 00:18:19.843384 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.843441 kubelet[2667]: W1108 00:18:19.843423 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.843622 kubelet[2667]: E1108 00:18:19.843473 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.844024 kubelet[2667]: E1108 00:18:19.843997 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.844024 kubelet[2667]: W1108 00:18:19.844019 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.844105 kubelet[2667]: E1108 00:18:19.844033 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.844554 kubelet[2667]: E1108 00:18:19.844380 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.844554 kubelet[2667]: W1108 00:18:19.844532 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.844554 kubelet[2667]: E1108 00:18:19.844546 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.845540 kubelet[2667]: E1108 00:18:19.845512 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.845540 kubelet[2667]: W1108 00:18:19.845530 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.845540 kubelet[2667]: E1108 00:18:19.845543 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.846186 kubelet[2667]: E1108 00:18:19.845997 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.846186 kubelet[2667]: W1108 00:18:19.846018 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.846186 kubelet[2667]: E1108 00:18:19.846070 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.849749 kubelet[2667]: E1108 00:18:19.849716 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.849812 kubelet[2667]: W1108 00:18:19.849749 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.849812 kubelet[2667]: E1108 00:18:19.849779 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.850268 kubelet[2667]: E1108 00:18:19.850218 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.850268 kubelet[2667]: W1108 00:18:19.850234 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.850268 kubelet[2667]: E1108 00:18:19.850248 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.851293 kubelet[2667]: E1108 00:18:19.851257 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.851293 kubelet[2667]: W1108 00:18:19.851279 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.851293 kubelet[2667]: E1108 00:18:19.851292 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.851993 kubelet[2667]: E1108 00:18:19.851971 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.851993 kubelet[2667]: W1108 00:18:19.851991 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.852082 kubelet[2667]: E1108 00:18:19.852006 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.852343 kubelet[2667]: E1108 00:18:19.852323 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.852343 kubelet[2667]: W1108 00:18:19.852340 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.852442 kubelet[2667]: E1108 00:18:19.852352 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.852751 kubelet[2667]: E1108 00:18:19.852730 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.852751 kubelet[2667]: W1108 00:18:19.852748 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.852818 kubelet[2667]: E1108 00:18:19.852761 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.853256 kubelet[2667]: E1108 00:18:19.853231 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.853256 kubelet[2667]: W1108 00:18:19.853255 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.853371 kubelet[2667]: E1108 00:18:19.853269 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.853582 kubelet[2667]: E1108 00:18:19.853546 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.853582 kubelet[2667]: W1108 00:18:19.853564 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.853582 kubelet[2667]: E1108 00:18:19.853577 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.853947 kubelet[2667]: E1108 00:18:19.853913 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.853947 kubelet[2667]: W1108 00:18:19.853928 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.853947 kubelet[2667]: E1108 00:18:19.853941 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.854342 kubelet[2667]: E1108 00:18:19.854314 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.854342 kubelet[2667]: W1108 00:18:19.854340 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.854342 kubelet[2667]: E1108 00:18:19.854362 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.854840 kubelet[2667]: E1108 00:18:19.854821 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.854892 kubelet[2667]: W1108 00:18:19.854838 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.854928 kubelet[2667]: E1108 00:18:19.854889 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.855171 kubelet[2667]: E1108 00:18:19.855153 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.855171 kubelet[2667]: W1108 00:18:19.855169 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.855260 kubelet[2667]: E1108 00:18:19.855182 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.855498 kubelet[2667]: E1108 00:18:19.855479 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.855498 kubelet[2667]: W1108 00:18:19.855495 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.855611 kubelet[2667]: E1108 00:18:19.855508 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.855744 kubelet[2667]: E1108 00:18:19.855703 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:19.856415 kubelet[2667]: E1108 00:18:19.856283 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.856473 containerd[1594]: time="2025-11-08T00:18:19.856415438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fw7ts,Uid:bbf8d7e7-5540-4856-bf3d-ce66f535159b,Namespace:calico-system,Attempt:0,}" Nov 8 00:18:19.856683 kubelet[2667]: W1108 00:18:19.856534 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.856683 kubelet[2667]: E1108 00:18:19.856557 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.856981 kubelet[2667]: I1108 00:18:19.856906 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c633ef5-243d-451b-9c89-0f760540ce13-varrun\") pod \"csi-node-driver-s7fgw\" (UID: \"9c633ef5-243d-451b-9c89-0f760540ce13\") " pod="calico-system/csi-node-driver-s7fgw" Nov 8 00:18:19.857740 kubelet[2667]: E1108 00:18:19.857581 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.857740 kubelet[2667]: W1108 00:18:19.857596 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.857740 kubelet[2667]: E1108 00:18:19.857618 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.858042 kubelet[2667]: E1108 00:18:19.858010 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.858042 kubelet[2667]: W1108 00:18:19.858025 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.858042 kubelet[2667]: E1108 00:18:19.858054 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.858516 kubelet[2667]: E1108 00:18:19.858495 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.858516 kubelet[2667]: W1108 00:18:19.858511 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.858632 kubelet[2667]: E1108 00:18:19.858526 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.858632 kubelet[2667]: I1108 00:18:19.858563 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9c633ef5-243d-451b-9c89-0f760540ce13-socket-dir\") pod \"csi-node-driver-s7fgw\" (UID: \"9c633ef5-243d-451b-9c89-0f760540ce13\") " pod="calico-system/csi-node-driver-s7fgw" Nov 8 00:18:19.859031 kubelet[2667]: E1108 00:18:19.858995 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.859031 kubelet[2667]: W1108 00:18:19.859026 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.859155 kubelet[2667]: E1108 00:18:19.859056 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.859155 kubelet[2667]: I1108 00:18:19.859105 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6hbx\" (UniqueName: \"kubernetes.io/projected/9c633ef5-243d-451b-9c89-0f760540ce13-kube-api-access-x6hbx\") pod \"csi-node-driver-s7fgw\" (UID: \"9c633ef5-243d-451b-9c89-0f760540ce13\") " pod="calico-system/csi-node-driver-s7fgw" Nov 8 00:18:19.859421 kubelet[2667]: E1108 00:18:19.859397 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.859477 kubelet[2667]: W1108 00:18:19.859421 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.859477 kubelet[2667]: E1108 00:18:19.859464 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.859761 kubelet[2667]: E1108 00:18:19.859744 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.859761 kubelet[2667]: W1108 00:18:19.859757 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.859895 kubelet[2667]: E1108 00:18:19.859776 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.860120 kubelet[2667]: E1108 00:18:19.860099 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.860120 kubelet[2667]: W1108 00:18:19.860116 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.860282 kubelet[2667]: E1108 00:18:19.860155 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.860282 kubelet[2667]: I1108 00:18:19.860180 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c633ef5-243d-451b-9c89-0f760540ce13-registration-dir\") pod \"csi-node-driver-s7fgw\" (UID: \"9c633ef5-243d-451b-9c89-0f760540ce13\") " pod="calico-system/csi-node-driver-s7fgw" Nov 8 00:18:19.861833 kubelet[2667]: E1108 00:18:19.860652 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.861833 kubelet[2667]: W1108 00:18:19.860667 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.861833 kubelet[2667]: E1108 00:18:19.860685 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.862827 kubelet[2667]: E1108 00:18:19.862784 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.862827 kubelet[2667]: W1108 00:18:19.862806 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.862937 kubelet[2667]: E1108 00:18:19.862870 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.863552 kubelet[2667]: E1108 00:18:19.863512 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.863552 kubelet[2667]: W1108 00:18:19.863531 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.863639 kubelet[2667]: E1108 00:18:19.863606 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.863639 kubelet[2667]: I1108 00:18:19.863627 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c633ef5-243d-451b-9c89-0f760540ce13-kubelet-dir\") pod \"csi-node-driver-s7fgw\" (UID: \"9c633ef5-243d-451b-9c89-0f760540ce13\") " pod="calico-system/csi-node-driver-s7fgw" Nov 8 00:18:19.863954 kubelet[2667]: E1108 00:18:19.863918 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.863954 kubelet[2667]: W1108 00:18:19.863933 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.863954 kubelet[2667]: E1108 00:18:19.863950 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:18:19.864202 kubelet[2667]: E1108 00:18:19.864182 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.864202 kubelet[2667]: W1108 00:18:19.864195 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.864281 kubelet[2667]: E1108 00:18:19.864207 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.864924 kubelet[2667]: E1108 00:18:19.864751 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.864924 kubelet[2667]: W1108 00:18:19.864769 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.864924 kubelet[2667]: E1108 00:18:19.864782 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.865903 kubelet[2667]: E1108 00:18:19.865193 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:18:19.865903 kubelet[2667]: W1108 00:18:19.865205 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:18:19.865903 kubelet[2667]: E1108 00:18:19.865217 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:18:19.889169 containerd[1594]: time="2025-11-08T00:18:19.888993944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:19.889169 containerd[1594]: time="2025-11-08T00:18:19.889096699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:19.889169 containerd[1594]: time="2025-11-08T00:18:19.889123432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:19.889429 containerd[1594]: time="2025-11-08T00:18:19.889256757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Nov 8 00:18:19.942068 containerd[1594]: time="2025-11-08T00:18:19.942005207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fw7ts,Uid:bbf8d7e7-5540-4856-bf3d-ce66f535159b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f626acb3497a72829b040b79fad224aa1c4df9c60c16115aecda4238dd0085c\""
Nov 8 00:18:19.943254 kubelet[2667]: E1108 00:18:19.943199 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
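The dns.go record recurs throughout this boot: the host resolv.conf lists more nameservers than the resolver limit of three, so the kubelet trims the list it hands to pods (here to 1.1.1.1, 1.0.0.1, 8.8.8.8). A sketch of the check behind the warning, assuming the standard /etc/resolv.conf location; the limit of three matches the warning's semantics:

```go
// Count "nameserver" entries the way the kubelet's DNS configurer would
// before trimming: Linux resolvers use at most three.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // resolver limit behind the kubelet warning

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("limits exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
```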
Nov 8 00:18:21.246942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2639840601.mount: Deactivated successfully.
Nov 8 00:18:21.354358 kubelet[2667]: E1108 00:18:21.354304 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13"
Nov 8 00:18:21.639218 containerd[1594]: time="2025-11-08T00:18:21.639083157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:18:21.639996 containerd[1594]: time="2025-11-08T00:18:21.639946728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 8 00:18:21.641310 containerd[1594]: time="2025-11-08T00:18:21.641266914Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:18:21.644266 containerd[1594]: time="2025-11-08T00:18:21.644220047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:18:21.644871 containerd[1594]: time="2025-11-08T00:18:21.644787942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.852839653s"
Nov 8 00:18:21.644871 containerd[1594]: time="2025-11-08T00:18:21.644821689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
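A quick sanity check on the pull record above: containerd reports a 35,234,482-byte image fetched in 1.852839653s, which works out to roughly 19 MB/s. The arithmetic, straight from the two logged numbers:

```go
// Effective pull throughput for the typha image, computed from the size and
// duration in the "Pulled image" record above.
package main

import "fmt"

func main() {
	const size = 35234482       // bytes, from the record
	const seconds = 1.852839653 // pull duration, from the record
	fmt.Printf("%.1f MB/s\n", size/seconds/1e6) // ≈ 19.0 MB/s
}
```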
Nov 8 00:18:21.653965 containerd[1594]: time="2025-11-08T00:18:21.653938876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 8 00:18:21.684392 containerd[1594]: time="2025-11-08T00:18:21.684346471Z" level=info msg="CreateContainer within sandbox \"4c143bf062881ffbe63f7048e16f327513455e54790cd49e1e421de6cde9a25b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 8 00:18:21.700602 containerd[1594]: time="2025-11-08T00:18:21.700549650Z" level=info msg="CreateContainer within sandbox \"4c143bf062881ffbe63f7048e16f327513455e54790cd49e1e421de6cde9a25b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9d06e06e1c37694191a7867691ecd1f2decf0c6a7f2c831384717cd4529cda28\""
Nov 8 00:18:21.703942 containerd[1594]: time="2025-11-08T00:18:21.703899378Z" level=info msg="StartContainer for \"9d06e06e1c37694191a7867691ecd1f2decf0c6a7f2c831384717cd4529cda28\""
Nov 8 00:18:21.789676 containerd[1594]: time="2025-11-08T00:18:21.789610283Z" level=info msg="StartContainer for \"9d06e06e1c37694191a7867691ecd1f2decf0c6a7f2c831384717cd4529cda28\" returns successfully"
Nov 8 00:18:22.421724 kubelet[2667]: E1108 00:18:22.421666 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:22.474656 kubelet[2667]: E1108 00:18:22.474616 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:18:22.474656 kubelet[2667]: W1108 00:18:22.474641 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:18:22.474656 kubelet[2667]: E1108 00:18:22.474668 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:18:23.086722 containerd[1594]: time="2025-11-08T00:18:23.086612735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:18:23.088247 containerd[1594]: time="2025-11-08T00:18:23.088190368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 8 00:18:23.089806 containerd[1594]: time="2025-11-08T00:18:23.089737499Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:18:23.092238 containerd[1594]: time="2025-11-08T00:18:23.092192903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:18:23.092787 containerd[1594]: time="2025-11-08T00:18:23.092721827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.438683353s"
Nov 8 00:18:23.092787 containerd[1594]: time="2025-11-08T00:18:23.092778098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 8 00:18:23.101474 containerd[1594]: time="2025-11-08T00:18:23.101404946Z" level=info msg="CreateContainer within sandbox \"1f626acb3497a72829b040b79fad224aa1c4df9c60c16115aecda4238dd0085c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 8 00:18:23.119431 containerd[1594]: time="2025-11-08T00:18:23.119365628Z" level=info msg="CreateContainer within sandbox \"1f626acb3497a72829b040b79fad224aa1c4df9c60c16115aecda4238dd0085c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"36465374bbc69ceedd6bda512d551a982751e4f454a93d01a34b5b796943f470\""
Nov 8 00:18:23.120073 containerd[1594]: time="2025-11-08T00:18:23.120021171Z" level=info msg="StartContainer for \"36465374bbc69ceedd6bda512d551a982751e4f454a93d01a34b5b796943f470\""
Nov 8 00:18:23.282564 containerd[1594]: time="2025-11-08T00:18:23.282432935Z" level=info msg="StartContainer for \"36465374bbc69ceedd6bda512d551a982751e4f454a93d01a34b5b796943f470\" returns successfully"
Nov 8 00:18:23.388064 kubelet[2667]: E1108 00:18:23.353140 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13"
Nov 8 00:18:23.422941 kubelet[2667]: I1108 00:18:23.422872 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 8 00:18:23.423922 kubelet[2667]: E1108 00:18:23.423894 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:23.675311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36465374bbc69ceedd6bda512d551a982751e4f454a93d01a34b5b796943f470-rootfs.mount: Deactivated successfully.
Nov 8 00:18:23.935923 containerd[1594]: time="2025-11-08T00:18:23.935720845Z" level=info msg="shim disconnected" id=36465374bbc69ceedd6bda512d551a982751e4f454a93d01a34b5b796943f470 namespace=k8s.io
Nov 8 00:18:23.935923 containerd[1594]: time="2025-11-08T00:18:23.935803658Z" level=warning msg="cleaning up after shim disconnected" id=36465374bbc69ceedd6bda512d551a982751e4f454a93d01a34b5b796943f470 namespace=k8s.io
Nov 8 00:18:23.935923 containerd[1594]: time="2025-11-08T00:18:23.935812756Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:18:24.056030 kubelet[2667]: I1108 00:18:24.055961 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f869bb954-qmrz6" podStartSLOduration=3.191011817 podStartE2EDuration="5.055939897s" podCreationTimestamp="2025-11-08 00:18:19 +0000 UTC" firstStartedPulling="2025-11-08 00:18:19.788787794 +0000 UTC m=+18.513180076" lastFinishedPulling="2025-11-08 00:18:21.653715874 +0000 UTC m=+20.378108156" observedRunningTime="2025-11-08 00:18:22.437648326 +0000 UTC m=+21.162040638" watchObservedRunningTime="2025-11-08 00:18:24.055939897 +0000 UTC m=+22.780332179"
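The latency-tracker record above encodes one relation worth spelling out: podStartSLOduration is the end-to-end startup time minus the image-pull window, i.e. podStartE2EDuration - (lastFinishedPulling - firstStartedPulling). Reproducing the arithmetic from the monotonic (m=+...) offsets in the record:

```go
// Verify podStartSLOduration from the record above: E2E startup minus the
// image-pull window, using the logged monotonic offsets.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 5055939897 * time.Nanosecond        // podStartE2EDuration = 5.055939897s
	firstPull := 18513180076 * time.Nanosecond // firstStartedPulling, m=+18.513180076
	lastPull := 20378108156 * time.Nanosecond  // lastFinishedPulling, m=+20.378108156

	slo := e2e - (lastPull - firstPull)
	fmt.Println(slo) // 3.191011817s — matches podStartSLOduration
}
```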
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:18:28.543757 containerd[1594]: time="2025-11-08T00:18:28.543715622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.114823112s" Nov 8 00:18:28.543757 containerd[1594]: time="2025-11-08T00:18:28.543748777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:18:28.546505 containerd[1594]: time="2025-11-08T00:18:28.546452612Z" level=info msg="CreateContainer within sandbox \"1f626acb3497a72829b040b79fad224aa1c4df9c60c16115aecda4238dd0085c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:18:28.563872 containerd[1594]: time="2025-11-08T00:18:28.563792210Z" level=info msg="CreateContainer within sandbox \"1f626acb3497a72829b040b79fad224aa1c4df9c60c16115aecda4238dd0085c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7aacd77992423165546608e69b997d539fe89ea049cff9210299bb2b04336170\"" Nov 8 00:18:28.564693 containerd[1594]: time="2025-11-08T00:18:28.564667593Z" level=info msg="StartContainer for \"7aacd77992423165546608e69b997d539fe89ea049cff9210299bb2b04336170\"" Nov 8 00:18:28.635069 containerd[1594]: time="2025-11-08T00:18:28.634983978Z" level=info msg="StartContainer for \"7aacd77992423165546608e69b997d539fe89ea049cff9210299bb2b04336170\" returns successfully" Nov 8 00:18:29.355321 kubelet[2667]: E1108 00:18:29.355270 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13" Nov 8 00:18:29.438506 kubelet[2667]: E1108 00:18:29.438455 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:29.854362 kubelet[2667]: I1108 00:18:29.854321 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:18:29.854789 kubelet[2667]: E1108 00:18:29.854764 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:30.440468 kubelet[2667]: E1108 00:18:30.440395 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:30.440969 kubelet[2667]: E1108 00:18:30.440690 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:31.242837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7aacd77992423165546608e69b997d539fe89ea049cff9210299bb2b04336170-rootfs.mount: Deactivated successfully. 
Nov 8 00:18:31.257899 kubelet[2667]: I1108 00:18:31.257838 2667 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 8 00:18:31.358451 containerd[1594]: time="2025-11-08T00:18:31.358391808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7fgw,Uid:9c633ef5-243d-451b-9c89-0f760540ce13,Namespace:calico-system,Attempt:0,}"
Nov 8 00:18:31.388676 containerd[1594]: time="2025-11-08T00:18:31.388580547Z" level=info msg="shim disconnected" id=7aacd77992423165546608e69b997d539fe89ea049cff9210299bb2b04336170 namespace=k8s.io
Nov 8 00:18:31.388676 containerd[1594]: time="2025-11-08T00:18:31.388664200Z" level=warning msg="cleaning up after shim disconnected" id=7aacd77992423165546608e69b997d539fe89ea049cff9210299bb2b04336170 namespace=k8s.io
Nov 8 00:18:31.388676 containerd[1594]: time="2025-11-08T00:18:31.388673769Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:18:31.877595 kubelet[2667]: I1108 00:18:31.877537 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmg9t\" (UniqueName: \"kubernetes.io/projected/54174585-9397-4869-81c3-ea42889b85ce-kube-api-access-wmg9t\") pod \"calico-kube-controllers-5497d898d6-c7j84\" (UID: \"54174585-9397-4869-81c3-ea42889b85ce\") " pod="calico-system/calico-kube-controllers-5497d898d6-c7j84"
Nov 8 00:18:31.877595 kubelet[2667]: I1108 00:18:31.877582 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l645l\" (UniqueName: \"kubernetes.io/projected/83b4c340-630f-41a2-8c28-f5f9998eb1d0-kube-api-access-l645l\") pod \"coredns-668d6bf9bc-xkvq8\" (UID: \"83b4c340-630f-41a2-8c28-f5f9998eb1d0\") " pod="kube-system/coredns-668d6bf9bc-xkvq8"
Nov 8 00:18:31.877595 kubelet[2667]: I1108 00:18:31.877601 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ec8164c-28b7-4eb6-afc2-8fbd6d62e774-config-volume\") pod \"coredns-668d6bf9bc-z9rcx\" (UID: \"5ec8164c-28b7-4eb6-afc2-8fbd6d62e774\") " pod="kube-system/coredns-668d6bf9bc-z9rcx"
Nov 8 00:18:31.878323 kubelet[2667]: I1108 00:18:31.877618 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b1873f4-422a-4e54-9bb8-81f47889b499-whisker-backend-key-pair\") pod \"whisker-8488696cbf-5xxwb\" (UID: \"5b1873f4-422a-4e54-9bb8-81f47889b499\") " pod="calico-system/whisker-8488696cbf-5xxwb"
Nov 8 00:18:31.878323 kubelet[2667]: I1108 00:18:31.877636 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba4e3da5-1f7c-4476-a748-4d008501b030-config\") pod \"goldmane-666569f655-9dm45\" (UID: \"ba4e3da5-1f7c-4476-a748-4d008501b030\") " pod="calico-system/goldmane-666569f655-9dm45"
Nov 8 00:18:31.878323 kubelet[2667]: I1108 00:18:31.877650 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vkbc\" (UniqueName: \"kubernetes.io/projected/0357a036-98a8-435c-9d85-9cc2bb4428b4-kube-api-access-5vkbc\") pod \"calico-apiserver-7b4d75b794-6dvvd\" (UID: \"0357a036-98a8-435c-9d85-9cc2bb4428b4\") " pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd"
Nov 8 00:18:31.878323 kubelet[2667]: I1108 00:18:31.877664 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ba4e3da5-1f7c-4476-a748-4d008501b030-goldmane-key-pair\") pod \"goldmane-666569f655-9dm45\" (UID: \"ba4e3da5-1f7c-4476-a748-4d008501b030\") " pod="calico-system/goldmane-666569f655-9dm45"
Nov 8 00:18:31.878323 kubelet[2667]: I1108 00:18:31.877678 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba4e3da5-1f7c-4476-a748-4d008501b030-goldmane-ca-bundle\") pod \"goldmane-666569f655-9dm45\" (UID: \"ba4e3da5-1f7c-4476-a748-4d008501b030\") " pod="calico-system/goldmane-666569f655-9dm45"
Nov 8 00:18:31.878449 kubelet[2667]: I1108 00:18:31.877696 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcvzj\" (UniqueName: \"kubernetes.io/projected/ba4e3da5-1f7c-4476-a748-4d008501b030-kube-api-access-qcvzj\") pod \"goldmane-666569f655-9dm45\" (UID: \"ba4e3da5-1f7c-4476-a748-4d008501b030\") " pod="calico-system/goldmane-666569f655-9dm45"
Nov 8 00:18:31.878449 kubelet[2667]: I1108 00:18:31.877712 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b4c340-630f-41a2-8c28-f5f9998eb1d0-config-volume\") pod \"coredns-668d6bf9bc-xkvq8\" (UID: \"83b4c340-630f-41a2-8c28-f5f9998eb1d0\") " pod="kube-system/coredns-668d6bf9bc-xkvq8"
Nov 8 00:18:31.878449 kubelet[2667]: I1108 00:18:31.877726 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0357a036-98a8-435c-9d85-9cc2bb4428b4-calico-apiserver-certs\") pod \"calico-apiserver-7b4d75b794-6dvvd\" (UID: \"0357a036-98a8-435c-9d85-9cc2bb4428b4\") " pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd"
Nov 8 00:18:31.878449 kubelet[2667]: I1108 00:18:31.877744 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2vz\" (UniqueName: \"kubernetes.io/projected/5ec8164c-28b7-4eb6-afc2-8fbd6d62e774-kube-api-access-bw2vz\") pod \"coredns-668d6bf9bc-z9rcx\" (UID: \"5ec8164c-28b7-4eb6-afc2-8fbd6d62e774\") " pod="kube-system/coredns-668d6bf9bc-z9rcx"
Nov 8 00:18:31.878449 kubelet[2667]: I1108 00:18:31.877758 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d-calico-apiserver-certs\") pod \"calico-apiserver-7b4d75b794-d277s\" (UID: \"4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d\") " pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s"
Nov 8 00:18:31.878584 kubelet[2667]: I1108 00:18:31.877771 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b1873f4-422a-4e54-9bb8-81f47889b499-whisker-ca-bundle\") pod \"whisker-8488696cbf-5xxwb\" (UID: \"5b1873f4-422a-4e54-9bb8-81f47889b499\") " pod="calico-system/whisker-8488696cbf-5xxwb"
Nov 8 00:18:31.878584 kubelet[2667]: I1108 00:18:31.877786 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqtwl\" (UniqueName: \"kubernetes.io/projected/5b1873f4-422a-4e54-9bb8-81f47889b499-kube-api-access-sqtwl\") pod \"whisker-8488696cbf-5xxwb\" (UID: \"5b1873f4-422a-4e54-9bb8-81f47889b499\") " pod="calico-system/whisker-8488696cbf-5xxwb"
Nov 8 00:18:31.878584 kubelet[2667]: I1108 00:18:31.877803 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54174585-9397-4869-81c3-ea42889b85ce-tigera-ca-bundle\") pod \"calico-kube-controllers-5497d898d6-c7j84\" (UID: \"54174585-9397-4869-81c3-ea42889b85ce\") " pod="calico-system/calico-kube-controllers-5497d898d6-c7j84"
Nov 8 00:18:31.878584 kubelet[2667]: I1108 00:18:31.877818 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98lh5\" (UniqueName: \"kubernetes.io/projected/4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d-kube-api-access-98lh5\") pod \"calico-apiserver-7b4d75b794-d277s\" (UID: \"4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d\") " pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s"
Nov 8 00:18:31.928711 containerd[1594]: time="2025-11-08T00:18:31.928630391Z" level=error msg="Failed to destroy network for sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:18:31.931287 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795-shm.mount: Deactivated successfully.
Nov 8 00:18:31.933492 containerd[1594]: time="2025-11-08T00:18:31.933451195Z" level=error msg="encountered an error cleaning up failed sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:18:31.933556 containerd[1594]: time="2025-11-08T00:18:31.933528075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7fgw,Uid:9c633ef5-243d-451b-9c89-0f760540ce13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:18:31.946173 kubelet[2667]: E1108 00:18:31.946096 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:18:31.946241 kubelet[2667]: E1108 00:18:31.946216 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s7fgw"
Nov 8 00:18:31.946288 kubelet[2667]: E1108 00:18:31.946257 2667 kuberuntime_manager.go:1237]
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s7fgw" Nov 8 00:18:31.946406 kubelet[2667]: E1108 00:18:31.946341 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s7fgw_calico-system(9c633ef5-243d-451b-9c89-0f760540ce13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s7fgw_calico-system(9c633ef5-243d-451b-9c89-0f760540ce13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13" Nov 8 00:18:32.097477 kubelet[2667]: E1108 00:18:32.097407 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:32.099158 containerd[1594]: time="2025-11-08T00:18:32.099112461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9rcx,Uid:5ec8164c-28b7-4eb6-afc2-8fbd6d62e774,Namespace:kube-system,Attempt:0,}" Nov 8 00:18:32.099246 containerd[1594]: time="2025-11-08T00:18:32.099157699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5497d898d6-c7j84,Uid:54174585-9397-4869-81c3-ea42889b85ce,Namespace:calico-system,Attempt:0,}" Nov 8 00:18:32.107445 containerd[1594]: time="2025-11-08T00:18:32.107391789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8488696cbf-5xxwb,Uid:5b1873f4-422a-4e54-9bb8-81f47889b499,Namespace:calico-system,Attempt:0,}" Nov 8 00:18:32.113145 containerd[1594]: time="2025-11-08T00:18:32.113104937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9dm45,Uid:ba4e3da5-1f7c-4476-a748-4d008501b030,Namespace:calico-system,Attempt:0,}" Nov 8 00:18:32.120427 kubelet[2667]: E1108 00:18:32.120401 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:32.120752 containerd[1594]: time="2025-11-08T00:18:32.120724372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xkvq8,Uid:83b4c340-630f-41a2-8c28-f5f9998eb1d0,Namespace:kube-system,Attempt:0,}" Nov 8 00:18:32.125179 containerd[1594]: time="2025-11-08T00:18:32.125146470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4d75b794-6dvvd,Uid:0357a036-98a8-435c-9d85-9cc2bb4428b4,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:18:32.125369 containerd[1594]: time="2025-11-08T00:18:32.125144947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4d75b794-d277s,Uid:4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:18:32.205142 containerd[1594]: time="2025-11-08T00:18:32.204995905Z" level=error msg="Failed to destroy network for sandbox 
\"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.205860 containerd[1594]: time="2025-11-08T00:18:32.205766033Z" level=error msg="encountered an error cleaning up failed sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.205938 containerd[1594]: time="2025-11-08T00:18:32.205834687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9rcx,Uid:5ec8164c-28b7-4eb6-afc2-8fbd6d62e774,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.206262 kubelet[2667]: E1108 00:18:32.206211 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.206324 kubelet[2667]: E1108 00:18:32.206299 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z9rcx" Nov 8 00:18:32.206363 kubelet[2667]: E1108 00:18:32.206329 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z9rcx" Nov 8 00:18:32.206682 kubelet[2667]: E1108 00:18:32.206377 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z9rcx_kube-system(5ec8164c-28b7-4eb6-afc2-8fbd6d62e774)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z9rcx_kube-system(5ec8164c-28b7-4eb6-afc2-8fbd6d62e774)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z9rcx" podUID="5ec8164c-28b7-4eb6-afc2-8fbd6d62e774" Nov 8 00:18:32.313679 containerd[1594]: time="2025-11-08T00:18:32.313629657Z" level=error 
msg="Failed to destroy network for sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.317508 containerd[1594]: time="2025-11-08T00:18:32.317475234Z" level=error msg="encountered an error cleaning up failed sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.317516 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce-shm.mount: Deactivated successfully. Nov 8 00:18:32.318932 containerd[1594]: time="2025-11-08T00:18:32.318839437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5497d898d6-c7j84,Uid:54174585-9397-4869-81c3-ea42889b85ce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.319556 kubelet[2667]: E1108 00:18:32.319508 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.319705 kubelet[2667]: E1108 00:18:32.319579 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5497d898d6-c7j84" Nov 8 00:18:32.319705 kubelet[2667]: E1108 00:18:32.319602 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5497d898d6-c7j84" Nov 8 00:18:32.319705 kubelet[2667]: E1108 00:18:32.319643 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5497d898d6-c7j84_calico-system(54174585-9397-4869-81c3-ea42889b85ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5497d898d6-c7j84_calico-system(54174585-9397-4869-81c3-ea42889b85ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5497d898d6-c7j84" podUID="54174585-9397-4869-81c3-ea42889b85ce" Nov 8 00:18:32.353515 containerd[1594]: time="2025-11-08T00:18:32.352443064Z" level=error msg="Failed to destroy network for sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.353515 containerd[1594]: time="2025-11-08T00:18:32.353029855Z" level=error msg="encountered an error cleaning up failed sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.353515 containerd[1594]: time="2025-11-08T00:18:32.353073771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4d75b794-d277s,Uid:4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.353765 kubelet[2667]: E1108 00:18:32.353310 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.353765 kubelet[2667]: E1108 00:18:32.353381 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s" Nov 8 00:18:32.353765 kubelet[2667]: E1108 00:18:32.353415 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s" Nov 8 00:18:32.354493 kubelet[2667]: E1108 00:18:32.353786 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b4d75b794-d277s_calico-apiserver(4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b4d75b794-d277s_calico-apiserver(4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s" podUID="4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d" Nov 8 00:18:32.356102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f-shm.mount: Deactivated successfully. Nov 8 00:18:32.359375 containerd[1594]: time="2025-11-08T00:18:32.357190445Z" level=error msg="Failed to destroy network for sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.360227 containerd[1594]: time="2025-11-08T00:18:32.360074522Z" level=error msg="encountered an error cleaning up failed sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.360227 containerd[1594]: time="2025-11-08T00:18:32.360126023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8488696cbf-5xxwb,Uid:5b1873f4-422a-4e54-9bb8-81f47889b499,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.360102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532-shm.mount: Deactivated successfully. 
Nov 8 00:18:32.361332 kubelet[2667]: E1108 00:18:32.360669 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.361332 kubelet[2667]: E1108 00:18:32.360832 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8488696cbf-5xxwb" Nov 8 00:18:32.361332 kubelet[2667]: E1108 00:18:32.361125 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8488696cbf-5xxwb" Nov 8 00:18:32.361424 kubelet[2667]: E1108 00:18:32.361180 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8488696cbf-5xxwb_calico-system(5b1873f4-422a-4e54-9bb8-81f47889b499)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8488696cbf-5xxwb_calico-system(5b1873f4-422a-4e54-9bb8-81f47889b499)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8488696cbf-5xxwb" podUID="5b1873f4-422a-4e54-9bb8-81f47889b499" Nov 8 00:18:32.362675 containerd[1594]: time="2025-11-08T00:18:32.362562400Z" level=error msg="Failed to destroy network for sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.365065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88-shm.mount: Deactivated successfully. 
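The paired errors for each pod above ("Failed to destroy network for sandbox ..." followed by "encountered an error cleaning up failed sandbox ..., marking sandbox state as SANDBOX_UNKNOWN") trace an add-then-rollback sequence: when CNI ADD fails, the runtime immediately invokes CNI DEL to undo any partial setup, and here the DEL fails for the same missing-nodename reason, leaving the sandbox state unknown. A rough Go sketch of that control flow using hypothetical stub functions; it is not containerd's CRI implementation.

package main

import (
	"errors"
	"fmt"
	"os"
)

var errNoNodename = errors.New("stat /var/lib/calico/nodename: no such file or directory")

// readNodename stands in for the plugin's node-identity lookup; while
// calico/node has not started, both ADD and DEL fail on it.
func readNodename() error {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		return errNoNodename
	}
	return nil
}

func cniAdd(id string) error { return readNodename() }
func cniDel(id string) error { return readNodename() }

// runPodSandbox shows the rollback: a failed ADD triggers a DEL, and a
// failed DEL leaves the sandbox marked SANDBOX_UNKNOWN.
func runPodSandbox(id string) error {
	addErr := cniAdd(id)
	if addErr == nil {
		return nil
	}
	if delErr := cniDel(id); delErr != nil {
		fmt.Printf("Failed to destroy network for sandbox %q: %v\n", id, delErr)
		fmt.Printf("encountered an error cleaning up failed sandbox %q, marking sandbox state as SANDBOX_UNKNOWN\n", id)
	}
	return fmt.Errorf("failed to setup network for sandbox %q: %w", id, addErr)
}

func main() {
	if err := runPodSandbox("455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88"); err != nil {
		fmt.Println("RunPodSandbox failed:", err)
	}
}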
Nov 8 00:18:32.365930 containerd[1594]: time="2025-11-08T00:18:32.365883637Z" level=error msg="encountered an error cleaning up failed sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.366006 containerd[1594]: time="2025-11-08T00:18:32.365937712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4d75b794-6dvvd,Uid:0357a036-98a8-435c-9d85-9cc2bb4428b4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.366979 kubelet[2667]: E1108 00:18:32.366877 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.366979 kubelet[2667]: E1108 00:18:32.366955 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" Nov 8 00:18:32.366979 kubelet[2667]: E1108 00:18:32.366976 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" Nov 8 00:18:32.367086 kubelet[2667]: E1108 00:18:32.367035 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b4d75b794-6dvvd_calico-apiserver(0357a036-98a8-435c-9d85-9cc2bb4428b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b4d75b794-6dvvd_calico-apiserver(0357a036-98a8-435c-9d85-9cc2bb4428b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" podUID="0357a036-98a8-435c-9d85-9cc2bb4428b4" Nov 8 00:18:32.374232 containerd[1594]: time="2025-11-08T00:18:32.372990264Z" level=error msg="Failed to destroy network for sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.374232 containerd[1594]: time="2025-11-08T00:18:32.373497932Z" level=error msg="encountered an error cleaning up failed sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.374232 containerd[1594]: time="2025-11-08T00:18:32.373560484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9dm45,Uid:ba4e3da5-1f7c-4476-a748-4d008501b030,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.374386 kubelet[2667]: E1108 00:18:32.373774 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.374386 kubelet[2667]: E1108 00:18:32.373819 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9dm45" Nov 8 00:18:32.374386 kubelet[2667]: E1108 00:18:32.373838 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9dm45" Nov 8 00:18:32.374476 kubelet[2667]: E1108 00:18:32.373892 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9dm45_calico-system(ba4e3da5-1f7c-4476-a748-4d008501b030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9dm45_calico-system(ba4e3da5-1f7c-4476-a748-4d008501b030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9dm45" podUID="ba4e3da5-1f7c-4476-a748-4d008501b030" Nov 8 00:18:32.382305 containerd[1594]: time="2025-11-08T00:18:32.382244097Z" level=error msg="Failed to destroy network for sandbox 
\"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.382701 containerd[1594]: time="2025-11-08T00:18:32.382667511Z" level=error msg="encountered an error cleaning up failed sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.382736 containerd[1594]: time="2025-11-08T00:18:32.382721636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xkvq8,Uid:83b4c340-630f-41a2-8c28-f5f9998eb1d0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.382996 kubelet[2667]: E1108 00:18:32.382951 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.383058 kubelet[2667]: E1108 00:18:32.383010 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xkvq8" Nov 8 00:18:32.383058 kubelet[2667]: E1108 00:18:32.383032 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xkvq8" Nov 8 00:18:32.383103 kubelet[2667]: E1108 00:18:32.383072 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xkvq8_kube-system(83b4c340-630f-41a2-8c28-f5f9998eb1d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xkvq8_kube-system(83b4c340-630f-41a2-8c28-f5f9998eb1d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xkvq8" podUID="83b4c340-630f-41a2-8c28-f5f9998eb1d0" Nov 8 00:18:32.464243 kubelet[2667]: I1108 00:18:32.464108 2667 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:18:32.465194 kubelet[2667]: I1108 00:18:32.465167 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:18:32.467227 kubelet[2667]: I1108 00:18:32.466950 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:18:32.482078 containerd[1594]: time="2025-11-08T00:18:32.482007893Z" level=info msg="StopPodSandbox for \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\"" Nov 8 00:18:32.483866 containerd[1594]: time="2025-11-08T00:18:32.483824195Z" level=info msg="StopPodSandbox for \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\"" Nov 8 00:18:32.485271 containerd[1594]: time="2025-11-08T00:18:32.485222082Z" level=info msg="StopPodSandbox for \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\"" Nov 8 00:18:32.488571 containerd[1594]: time="2025-11-08T00:18:32.488218609Z" level=info msg="Ensure that sandbox db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998 in task-service has been cleanup successfully" Nov 8 00:18:32.488571 containerd[1594]: time="2025-11-08T00:18:32.488228488Z" level=info msg="Ensure that sandbox 445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f in task-service has been cleanup successfully" Nov 8 00:18:32.488571 containerd[1594]: time="2025-11-08T00:18:32.488229370Z" level=info msg="Ensure that sandbox db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795 in task-service has been cleanup successfully" Nov 8 00:18:32.514468 kubelet[2667]: E1108 00:18:32.514431 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:32.517881 containerd[1594]: time="2025-11-08T00:18:32.517674171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:18:32.521119 kubelet[2667]: I1108 00:18:32.521095 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:18:32.522539 containerd[1594]: time="2025-11-08T00:18:32.521812287Z" level=info msg="StopPodSandbox for \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\"" Nov 8 00:18:32.523062 containerd[1594]: time="2025-11-08T00:18:32.523036568Z" level=info msg="Ensure that sandbox 455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88 in task-service has been cleanup successfully" Nov 8 00:18:32.525829 kubelet[2667]: I1108 00:18:32.525770 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:18:32.527818 containerd[1594]: time="2025-11-08T00:18:32.526998030Z" level=info msg="StopPodSandbox for \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\"" Nov 8 00:18:32.528378 kubelet[2667]: I1108 00:18:32.528359 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:18:32.528966 containerd[1594]: time="2025-11-08T00:18:32.528940076Z" level=info msg="StopPodSandbox for 
\"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\"" Nov 8 00:18:32.536325 containerd[1594]: time="2025-11-08T00:18:32.531258564Z" level=info msg="Ensure that sandbox c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c in task-service has been cleanup successfully" Nov 8 00:18:32.541285 containerd[1594]: time="2025-11-08T00:18:32.541240963Z" level=info msg="Ensure that sandbox 8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60 in task-service has been cleanup successfully" Nov 8 00:18:32.542888 containerd[1594]: time="2025-11-08T00:18:32.542139670Z" level=error msg="StopPodSandbox for \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\" failed" error="failed to destroy network for sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.542960 kubelet[2667]: E1108 00:18:32.542469 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:18:32.542960 kubelet[2667]: E1108 00:18:32.542541 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f"} Nov 8 00:18:32.542960 kubelet[2667]: E1108 00:18:32.542615 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:18:32.542960 kubelet[2667]: E1108 00:18:32.542638 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s" podUID="4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d" Nov 8 00:18:32.543207 kubelet[2667]: I1108 00:18:32.543050 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:18:32.545159 containerd[1594]: time="2025-11-08T00:18:32.545074096Z" level=info msg="StopPodSandbox for \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\"" Nov 8 00:18:32.546867 containerd[1594]: time="2025-11-08T00:18:32.546823838Z" level=info msg="Ensure that sandbox 11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532 
in task-service has been cleanup successfully" Nov 8 00:18:32.548011 kubelet[2667]: I1108 00:18:32.547201 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:18:32.548461 containerd[1594]: time="2025-11-08T00:18:32.548426214Z" level=info msg="StopPodSandbox for \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\"" Nov 8 00:18:32.550096 containerd[1594]: time="2025-11-08T00:18:32.548609820Z" level=info msg="Ensure that sandbox a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce in task-service has been cleanup successfully" Nov 8 00:18:32.571645 containerd[1594]: time="2025-11-08T00:18:32.571593939Z" level=error msg="StopPodSandbox for \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\" failed" error="failed to destroy network for sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.572244 kubelet[2667]: E1108 00:18:32.572059 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:18:32.572244 kubelet[2667]: E1108 00:18:32.572134 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998"} Nov 8 00:18:32.572244 kubelet[2667]: E1108 00:18:32.572173 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ec8164c-28b7-4eb6-afc2-8fbd6d62e774\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:18:32.572244 kubelet[2667]: E1108 00:18:32.572216 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ec8164c-28b7-4eb6-afc2-8fbd6d62e774\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z9rcx" podUID="5ec8164c-28b7-4eb6-afc2-8fbd6d62e774" Nov 8 00:18:32.574416 containerd[1594]: time="2025-11-08T00:18:32.574073391Z" level=error msg="StopPodSandbox for \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\" failed" error="failed to destroy network for sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 8 00:18:32.574484 kubelet[2667]: E1108 00:18:32.574180 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:18:32.574484 kubelet[2667]: E1108 00:18:32.574204 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795"} Nov 8 00:18:32.574484 kubelet[2667]: E1108 00:18:32.574225 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c633ef5-243d-451b-9c89-0f760540ce13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:18:32.574484 kubelet[2667]: E1108 00:18:32.574247 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c633ef5-243d-451b-9c89-0f760540ce13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13" Nov 8 00:18:32.595924 containerd[1594]: time="2025-11-08T00:18:32.595698087Z" level=error msg="StopPodSandbox for \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\" failed" error="failed to destroy network for sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.596866 kubelet[2667]: E1108 00:18:32.596046 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:18:32.596866 kubelet[2667]: E1108 00:18:32.596125 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532"} Nov 8 00:18:32.596866 kubelet[2667]: E1108 00:18:32.596160 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b1873f4-422a-4e54-9bb8-81f47889b499\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:18:32.596866 kubelet[2667]: E1108 00:18:32.596186 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b1873f4-422a-4e54-9bb8-81f47889b499\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8488696cbf-5xxwb" podUID="5b1873f4-422a-4e54-9bb8-81f47889b499" Nov 8 00:18:32.597062 containerd[1594]: time="2025-11-08T00:18:32.596947426Z" level=error msg="StopPodSandbox for \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\" failed" error="failed to destroy network for sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.597160 kubelet[2667]: E1108 00:18:32.597127 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:18:32.597199 kubelet[2667]: E1108 00:18:32.597159 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88"} Nov 8 00:18:32.597199 kubelet[2667]: E1108 00:18:32.597185 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0357a036-98a8-435c-9d85-9cc2bb4428b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:18:32.597267 kubelet[2667]: E1108 00:18:32.597202 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0357a036-98a8-435c-9d85-9cc2bb4428b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" podUID="0357a036-98a8-435c-9d85-9cc2bb4428b4" Nov 8 00:18:32.597685 containerd[1594]: time="2025-11-08T00:18:32.597636977Z" level=error msg="StopPodSandbox for \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\" failed" 
error="failed to destroy network for sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.597922 kubelet[2667]: E1108 00:18:32.597828 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:18:32.598097 kubelet[2667]: E1108 00:18:32.597930 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce"} Nov 8 00:18:32.598097 kubelet[2667]: E1108 00:18:32.597985 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54174585-9397-4869-81c3-ea42889b85ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:18:32.598097 kubelet[2667]: E1108 00:18:32.598014 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54174585-9397-4869-81c3-ea42889b85ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5497d898d6-c7j84" podUID="54174585-9397-4869-81c3-ea42889b85ce" Nov 8 00:18:32.603977 containerd[1594]: time="2025-11-08T00:18:32.603938860Z" level=error msg="StopPodSandbox for \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\" failed" error="failed to destroy network for sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.604180 kubelet[2667]: E1108 00:18:32.604127 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:18:32.604232 kubelet[2667]: E1108 00:18:32.604188 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60"} Nov 8 00:18:32.604270 kubelet[2667]: E1108 
00:18:32.604251 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba4e3da5-1f7c-4476-a748-4d008501b030\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:18:32.604330 kubelet[2667]: E1108 00:18:32.604278 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba4e3da5-1f7c-4476-a748-4d008501b030\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9dm45" podUID="ba4e3da5-1f7c-4476-a748-4d008501b030" Nov 8 00:18:32.605883 containerd[1594]: time="2025-11-08T00:18:32.605840137Z" level=error msg="StopPodSandbox for \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\" failed" error="failed to destroy network for sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:18:32.606018 kubelet[2667]: E1108 00:18:32.605979 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:18:32.606018 kubelet[2667]: E1108 00:18:32.606011 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c"} Nov 8 00:18:32.606088 kubelet[2667]: E1108 00:18:32.606030 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83b4c340-630f-41a2-8c28-f5f9998eb1d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:18:32.606088 kubelet[2667]: E1108 00:18:32.606048 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83b4c340-630f-41a2-8c28-f5f9998eb1d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-xkvq8" podUID="83b4c340-630f-41a2-8c28-f5f9998eb1d0" Nov 8 00:18:33.242961 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c-shm.mount: Deactivated successfully. Nov 8 00:18:33.243177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60-shm.mount: Deactivated successfully. Nov 8 00:18:36.354319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745950920.mount: Deactivated successfully. Nov 8 00:18:38.009004 containerd[1594]: time="2025-11-08T00:18:38.008931191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:18:38.010173 containerd[1594]: time="2025-11-08T00:18:38.010103897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:18:38.011673 containerd[1594]: time="2025-11-08T00:18:38.011611521Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:18:38.017237 containerd[1594]: time="2025-11-08T00:18:38.017191065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:18:38.018415 containerd[1594]: time="2025-11-08T00:18:38.018368931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.500645255s" Nov 8 00:18:38.018415 containerd[1594]: time="2025-11-08T00:18:38.018414690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:18:38.027178 containerd[1594]: time="2025-11-08T00:18:38.027124694Z" level=info msg="CreateContainer within sandbox \"1f626acb3497a72829b040b79fad224aa1c4df9c60c16115aecda4238dd0085c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:18:38.047569 containerd[1594]: time="2025-11-08T00:18:38.047525966Z" level=info msg="CreateContainer within sandbox \"1f626acb3497a72829b040b79fad224aa1c4df9c60c16115aecda4238dd0085c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"531ab508135d9439b0f8fae249b8d66876db8b028e907f364923de28cad979ad\"" Nov 8 00:18:38.048210 containerd[1594]: time="2025-11-08T00:18:38.048161354Z" level=info msg="StartContainer for \"531ab508135d9439b0f8fae249b8d66876db8b028e907f364923de28cad979ad\"" Nov 8 00:18:38.134812 containerd[1594]: time="2025-11-08T00:18:38.134690245Z" level=info msg="StartContainer for \"531ab508135d9439b0f8fae249b8d66876db8b028e907f364923de28cad979ad\" returns successfully" Nov 8 00:18:38.230274 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:18:38.230484 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 8 00:18:38.434683 containerd[1594]: time="2025-11-08T00:18:38.434354579Z" level=info msg="StopPodSandbox for \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\"" Nov 8 00:18:38.571800 kubelet[2667]: E1108 00:18:38.571758 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:38.587532 kubelet[2667]: I1108 00:18:38.587433 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fw7ts" podStartSLOduration=1.5121775880000001 podStartE2EDuration="19.587413318s" podCreationTimestamp="2025-11-08 00:18:19 +0000 UTC" firstStartedPulling="2025-11-08 00:18:19.944139808 +0000 UTC m=+18.668532090" lastFinishedPulling="2025-11-08 00:18:38.019375538 +0000 UTC m=+36.743767820" observedRunningTime="2025-11-08 00:18:38.58575859 +0000 UTC m=+37.310150872" watchObservedRunningTime="2025-11-08 00:18:38.587413318 +0000 UTC m=+37.311805600" Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.500 [INFO][3972] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.501 [INFO][3972] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" iface="eth0" netns="/var/run/netns/cni-62054a0f-451b-fb76-9671-8ae72d11d224" Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.501 [INFO][3972] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" iface="eth0" netns="/var/run/netns/cni-62054a0f-451b-fb76-9671-8ae72d11d224" Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.502 [INFO][3972] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" iface="eth0" netns="/var/run/netns/cni-62054a0f-451b-fb76-9671-8ae72d11d224" Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.502 [INFO][3972] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.502 [INFO][3972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.570 [INFO][3981] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" HandleID="k8s-pod-network.11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Workload="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.574 [INFO][3981] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.574 [INFO][3981] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.590 [WARNING][3981] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" HandleID="k8s-pod-network.11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Workload="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.590 [INFO][3981] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" HandleID="k8s-pod-network.11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Workload="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.593 [INFO][3981] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:38.599717 containerd[1594]: 2025-11-08 00:18:38.596 [INFO][3972] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:18:38.601564 containerd[1594]: time="2025-11-08T00:18:38.601491558Z" level=info msg="TearDown network for sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\" successfully" Nov 8 00:18:38.601564 containerd[1594]: time="2025-11-08T00:18:38.601539410Z" level=info msg="StopPodSandbox for \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\" returns successfully" Nov 8 00:18:38.623812 kubelet[2667]: I1108 00:18:38.623078 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b1873f4-422a-4e54-9bb8-81f47889b499-whisker-ca-bundle\") pod \"5b1873f4-422a-4e54-9bb8-81f47889b499\" (UID: \"5b1873f4-422a-4e54-9bb8-81f47889b499\") " Nov 8 00:18:38.623812 kubelet[2667]: I1108 00:18:38.623132 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b1873f4-422a-4e54-9bb8-81f47889b499-whisker-backend-key-pair\") pod \"5b1873f4-422a-4e54-9bb8-81f47889b499\" (UID: \"5b1873f4-422a-4e54-9bb8-81f47889b499\") " Nov 8 00:18:38.623812 kubelet[2667]: I1108 00:18:38.623153 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqtwl\" (UniqueName: \"kubernetes.io/projected/5b1873f4-422a-4e54-9bb8-81f47889b499-kube-api-access-sqtwl\") pod \"5b1873f4-422a-4e54-9bb8-81f47889b499\" (UID: \"5b1873f4-422a-4e54-9bb8-81f47889b499\") " Nov 8 00:18:38.625688 kubelet[2667]: I1108 00:18:38.625630 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b1873f4-422a-4e54-9bb8-81f47889b499-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5b1873f4-422a-4e54-9bb8-81f47889b499" (UID: "5b1873f4-422a-4e54-9bb8-81f47889b499"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:18:38.630701 kubelet[2667]: I1108 00:18:38.630640 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1873f4-422a-4e54-9bb8-81f47889b499-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5b1873f4-422a-4e54-9bb8-81f47889b499" (UID: "5b1873f4-422a-4e54-9bb8-81f47889b499"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:18:38.630902 kubelet[2667]: I1108 00:18:38.630872 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b1873f4-422a-4e54-9bb8-81f47889b499-kube-api-access-sqtwl" (OuterVolumeSpecName: "kube-api-access-sqtwl") pod "5b1873f4-422a-4e54-9bb8-81f47889b499" (UID: "5b1873f4-422a-4e54-9bb8-81f47889b499"). InnerVolumeSpecName "kube-api-access-sqtwl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:18:38.724057 kubelet[2667]: I1108 00:18:38.724007 2667 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b1873f4-422a-4e54-9bb8-81f47889b499-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 8 00:18:38.724057 kubelet[2667]: I1108 00:18:38.724046 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sqtwl\" (UniqueName: \"kubernetes.io/projected/5b1873f4-422a-4e54-9bb8-81f47889b499-kube-api-access-sqtwl\") on node \"localhost\" DevicePath \"\"" Nov 8 00:18:38.724057 kubelet[2667]: I1108 00:18:38.724056 2667 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b1873f4-422a-4e54-9bb8-81f47889b499-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 8 00:18:39.025978 systemd[1]: run-netns-cni\x2d62054a0f\x2d451b\x2dfb76\x2d9671\x2d8ae72d11d224.mount: Deactivated successfully. Nov 8 00:18:39.026219 systemd[1]: var-lib-kubelet-pods-5b1873f4\x2d422a\x2d4e54\x2d9bb8\x2d81f47889b499-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsqtwl.mount: Deactivated successfully. Nov 8 00:18:39.026389 systemd[1]: var-lib-kubelet-pods-5b1873f4\x2d422a\x2d4e54\x2d9bb8\x2d81f47889b499-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 8 00:18:39.572220 kubelet[2667]: E1108 00:18:39.572169 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:40.222918 kernel: bpftool[4177]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:18:40.235559 kubelet[2667]: I1108 00:18:40.235501 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcacda57-dec5-4042-b890-adc5f9a1885e-whisker-ca-bundle\") pod \"whisker-6f98c486d9-btd2s\" (UID: \"bcacda57-dec5-4042-b890-adc5f9a1885e\") " pod="calico-system/whisker-6f98c486d9-btd2s" Nov 8 00:18:40.235559 kubelet[2667]: I1108 00:18:40.235551 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bcacda57-dec5-4042-b890-adc5f9a1885e-whisker-backend-key-pair\") pod \"whisker-6f98c486d9-btd2s\" (UID: \"bcacda57-dec5-4042-b890-adc5f9a1885e\") " pod="calico-system/whisker-6f98c486d9-btd2s" Nov 8 00:18:40.235826 kubelet[2667]: I1108 00:18:40.235575 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6k2h\" (UniqueName: \"kubernetes.io/projected/bcacda57-dec5-4042-b890-adc5f9a1885e-kube-api-access-z6k2h\") pod \"whisker-6f98c486d9-btd2s\" (UID: \"bcacda57-dec5-4042-b890-adc5f9a1885e\") " pod="calico-system/whisker-6f98c486d9-btd2s" Nov 8 00:18:40.471491 containerd[1594]: time="2025-11-08T00:18:40.470456117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f98c486d9-btd2s,Uid:bcacda57-dec5-4042-b890-adc5f9a1885e,Namespace:calico-system,Attempt:0,}" Nov 8 00:18:40.476647 systemd-networkd[1256]: vxlan.calico: Link UP Nov 8 00:18:40.476657 systemd-networkd[1256]: vxlan.calico: Gained carrier Nov 8 00:18:40.718372 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:38502.service - OpenSSH per-connection server daemon (10.0.0.1:38502). Nov 8 00:18:40.770522 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 38502 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:18:40.773507 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:40.781553 systemd-logind[1568]: New session 8 of user core. Nov 8 00:18:40.789278 systemd[1]: Started session-8.scope - Session 8 of User core. 
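[Editor's note] The recurring "Nameserver limits exceeded" message is kubelet enforcing the classic resolv.conf limit of three nameservers: everything past the third entry is dropped, leaving the "applied nameserver line" of 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that truncation (constant and helper names are illustrative, not kubelet's actual code):

```go
// Toy version of kubelet's nameserver cap: parse resolv.conf-style input
// and keep at most three servers, as the log's warning describes.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // the resolv.conf limit kubelet enforces

func appliedNameservers(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		f := strings.Fields(sc.Text())
		if len(f) == 2 && f[0] == "nameserver" {
			servers = append(servers, f[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // the rest are "omitted"
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(appliedNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```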
Nov 8 00:18:40.856679 systemd-networkd[1256]: cali2ce3f28bdd1: Link UP Nov 8 00:18:40.857925 systemd-networkd[1256]: cali2ce3f28bdd1: Gained carrier Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.782 [INFO][4225] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6f98c486d9--btd2s-eth0 whisker-6f98c486d9- calico-system bcacda57-dec5-4042-b890-adc5f9a1885e 975 0 2025-11-08 00:18:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f98c486d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6f98c486d9-btd2s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2ce3f28bdd1 [] [] }} ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Namespace="calico-system" Pod="whisker-6f98c486d9-btd2s" WorkloadEndpoint="localhost-k8s-whisker--6f98c486d9--btd2s-" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.782 [INFO][4225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Namespace="calico-system" Pod="whisker-6f98c486d9-btd2s" WorkloadEndpoint="localhost-k8s-whisker--6f98c486d9--btd2s-eth0" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.813 [INFO][4241] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" HandleID="k8s-pod-network.65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Workload="localhost-k8s-whisker--6f98c486d9--btd2s-eth0" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.813 [INFO][4241] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" HandleID="k8s-pod-network.65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Workload="localhost-k8s-whisker--6f98c486d9--btd2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6f98c486d9-btd2s", "timestamp":"2025-11-08 00:18:40.813543914 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.813 [INFO][4241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.813 [INFO][4241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.814 [INFO][4241] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.820 [INFO][4241] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" host="localhost" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.826 [INFO][4241] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.831 [INFO][4241] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.834 [INFO][4241] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.835 [INFO][4241] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.835 [INFO][4241] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" host="localhost" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.837 [INFO][4241] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.840 [INFO][4241] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" host="localhost" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.848 [INFO][4241] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" host="localhost" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.848 [INFO][4241] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" host="localhost" Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.848 [INFO][4241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:40.878088 containerd[1594]: 2025-11-08 00:18:40.848 [INFO][4241] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" HandleID="k8s-pod-network.65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Workload="localhost-k8s-whisker--6f98c486d9--btd2s-eth0" Nov 8 00:18:40.878700 containerd[1594]: 2025-11-08 00:18:40.853 [INFO][4225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Namespace="calico-system" Pod="whisker-6f98c486d9-btd2s" WorkloadEndpoint="localhost-k8s-whisker--6f98c486d9--btd2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f98c486d9--btd2s-eth0", GenerateName:"whisker-6f98c486d9-", Namespace:"calico-system", SelfLink:"", UID:"bcacda57-dec5-4042-b890-adc5f9a1885e", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f98c486d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6f98c486d9-btd2s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2ce3f28bdd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:40.878700 containerd[1594]: 2025-11-08 00:18:40.853 [INFO][4225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Namespace="calico-system" Pod="whisker-6f98c486d9-btd2s" WorkloadEndpoint="localhost-k8s-whisker--6f98c486d9--btd2s-eth0" Nov 8 00:18:40.878700 containerd[1594]: 2025-11-08 00:18:40.853 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ce3f28bdd1 ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Namespace="calico-system" Pod="whisker-6f98c486d9-btd2s" WorkloadEndpoint="localhost-k8s-whisker--6f98c486d9--btd2s-eth0" Nov 8 00:18:40.878700 containerd[1594]: 2025-11-08 00:18:40.858 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Namespace="calico-system" Pod="whisker-6f98c486d9-btd2s" WorkloadEndpoint="localhost-k8s-whisker--6f98c486d9--btd2s-eth0" Nov 8 00:18:40.878700 containerd[1594]: 2025-11-08 00:18:40.858 [INFO][4225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Namespace="calico-system" Pod="whisker-6f98c486d9-btd2s" WorkloadEndpoint="localhost-k8s-whisker--6f98c486d9--btd2s-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f98c486d9--btd2s-eth0", GenerateName:"whisker-6f98c486d9-", Namespace:"calico-system", SelfLink:"", UID:"bcacda57-dec5-4042-b890-adc5f9a1885e", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f98c486d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef", Pod:"whisker-6f98c486d9-btd2s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2ce3f28bdd1", MAC:"e6:66:01:24:93:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:40.878700 containerd[1594]: 2025-11-08 00:18:40.874 [INFO][4225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef" Namespace="calico-system" Pod="whisker-6f98c486d9-btd2s" WorkloadEndpoint="localhost-k8s-whisker--6f98c486d9--btd2s-eth0" Nov 8 00:18:40.919907 containerd[1594]: time="2025-11-08T00:18:40.917495872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:40.919907 containerd[1594]: time="2025-11-08T00:18:40.917585805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:40.919907 containerd[1594]: time="2025-11-08T00:18:40.917599743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:40.919907 containerd[1594]: time="2025-11-08T00:18:40.919752066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:40.970101 sshd[4219]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:40.970340 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:40.974782 systemd-logind[1568]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:18:40.978839 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:38502.service: Deactivated successfully. Nov 8 00:18:40.982956 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:18:40.984697 systemd-logind[1568]: Removed session 8.
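[Editor's note] The ipam/ipam.go sequence above (look up affinities, try 192.168.88.128/26, claim 192.168.88.129) is Calico's block-affinity model: each host owns one or more /26 blocks and serves pod addresses from them. A toy sketch of handing out the first free address in such a block (state is an in-memory set here; real Calico persists it in the datastore under the host-wide IPAM lock the log acquires and releases):

```go
// Toy block allocator mirroring the log's assignments. Names and the
// map-based state are illustrative, not Calico's implementation.
package main

import (
	"fmt"
	"net/netip"
)

func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	// Skip the block's base address and walk upward until a free slot.
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{}
	a, _ := nextFree(block, used) // 192.168.88.129, as whisker-6f98c486d9-btd2s got
	b, _ := nextFree(block, used) // 192.168.88.130, as coredns-668d6bf9bc-z9rcx gets later
	fmt.Println(a, b)
}
```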
Nov 8 00:18:41.004669 containerd[1594]: time="2025-11-08T00:18:41.004622555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f98c486d9-btd2s,Uid:bcacda57-dec5-4042-b890-adc5f9a1885e,Namespace:calico-system,Attempt:0,} returns sandbox id \"65e75f1005a24a3ce0bf45a2bef147298af62a6ecbf9f0e0fc8cf5ebbd6fceef\"" Nov 8 00:18:41.007136 containerd[1594]: time="2025-11-08T00:18:41.007110603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:18:41.355236 kubelet[2667]: I1108 00:18:41.355187 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b1873f4-422a-4e54-9bb8-81f47889b499" path="/var/lib/kubelet/pods/5b1873f4-422a-4e54-9bb8-81f47889b499/volumes" Nov 8 00:18:41.910312 containerd[1594]: time="2025-11-08T00:18:41.910242257Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:41.917820 containerd[1594]: time="2025-11-08T00:18:41.911678858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:18:41.917952 containerd[1594]: time="2025-11-08T00:18:41.911761667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:18:41.918137 kubelet[2667]: E1108 00:18:41.918068 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:18:41.918205 kubelet[2667]: E1108 00:18:41.918146 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:18:41.921711 kubelet[2667]: E1108 00:18:41.921626 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:27502ee819424dd68f8b3ed29bc94e26,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6k2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f98c486d9-btd2s_calico-system(bcacda57-dec5-4042-b890-adc5f9a1885e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:41.923491 containerd[1594]: time="2025-11-08T00:18:41.923458174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:18:42.029018 systemd-networkd[1256]: cali2ce3f28bdd1: Gained IPv6LL Nov 8 00:18:42.248376 containerd[1594]: time="2025-11-08T00:18:42.248307656Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:42.249546 containerd[1594]: time="2025-11-08T00:18:42.249499543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:18:42.249693 containerd[1594]: time="2025-11-08T00:18:42.249594055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:18:42.249798 kubelet[2667]: E1108 00:18:42.249746 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:18:42.249890 kubelet[2667]: E1108 00:18:42.249808 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:18:42.250568 kubelet[2667]: E1108 00:18:42.249971 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6k2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f98c486d9-btd2s_calico-system(bcacda57-dec5-4042-b890-adc5f9a1885e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:42.251232 kubelet[2667]: E1108 00:18:42.251184 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f98c486d9-btd2s" podUID="bcacda57-dec5-4042-b890-adc5f9a1885e" Nov 8 00:18:42.285121 systemd-networkd[1256]: vxlan.calico: Gained IPv6LL Nov 8 00:18:42.585804 kubelet[2667]: E1108 00:18:42.585478 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f98c486d9-btd2s" podUID="bcacda57-dec5-4042-b890-adc5f9a1885e" Nov 8 00:18:43.354496 containerd[1594]: time="2025-11-08T00:18:43.354075509Z" level=info msg="StopPodSandbox for \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\"" Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.397 [INFO][4365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.398 [INFO][4365] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" iface="eth0" netns="/var/run/netns/cni-510498a0-4b84-bd8a-33bd-e35d84b5eced" Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.398 [INFO][4365] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" iface="eth0" netns="/var/run/netns/cni-510498a0-4b84-bd8a-33bd-e35d84b5eced" Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.399 [INFO][4365] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" iface="eth0" netns="/var/run/netns/cni-510498a0-4b84-bd8a-33bd-e35d84b5eced" Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.399 [INFO][4365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.399 [INFO][4365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.421 [INFO][4373] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" HandleID="k8s-pod-network.db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.421 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.421 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.427 [WARNING][4373] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" HandleID="k8s-pod-network.db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.427 [INFO][4373] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" HandleID="k8s-pod-network.db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.428 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:43.434541 containerd[1594]: 2025-11-08 00:18:43.431 [INFO][4365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:18:43.434972 containerd[1594]: time="2025-11-08T00:18:43.434758126Z" level=info msg="TearDown network for sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\" successfully" Nov 8 00:18:43.434972 containerd[1594]: time="2025-11-08T00:18:43.434798183Z" level=info msg="StopPodSandbox for \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\" returns successfully" Nov 8 00:18:43.435256 kubelet[2667]: E1108 00:18:43.435220 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:43.436528 containerd[1594]: time="2025-11-08T00:18:43.435996080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9rcx,Uid:5ec8164c-28b7-4eb6-afc2-8fbd6d62e774,Namespace:kube-system,Attempt:1,}" Nov 8 00:18:43.437815 systemd[1]: run-netns-cni\x2d510498a0\x2d4b84\x2dbd8a\x2d33bd\x2de35d84b5eced.mount: Deactivated successfully. 
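[Editor's note] As with the whisker sandbox earlier, the DEL for the coredns sandbox hits "Asked to release address but it doesn't exist. Ignoring": CNI DEL must be idempotent, so releasing an unknown IPAM handle is a warning, not an error, and repeated teardowns stay safe. A sketch of that contract (the map-backed store and method names are illustrative):

```go
// Idempotent release-by-handle, as the [WARNING] lines above describe.
package main

import "fmt"

type ipamStore map[string][]string // handleID -> allocated addresses

func (s ipamStore) releaseByHandle(handleID string) {
	addrs, ok := s[handleID]
	if !ok {
		// Unknown handle: warn and continue, never fail the DEL.
		fmt.Printf("WARNING: asked to release %s but it doesn't exist, ignoring\n", handleID)
		return
	}
	delete(s, handleID)
	fmt.Printf("released %v\n", addrs)
}

func main() {
	s := ipamStore{}
	h := "k8s-pod-network.db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998"
	s.releaseByHandle(h) // first DEL: nothing recorded, warn and move on
	s.releaseByHandle(h) // a second DEL is equally harmless
}
```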
Nov 8 00:18:43.564626 systemd-networkd[1256]: calie96ab54eabd: Link UP Nov 8 00:18:43.567889 systemd-networkd[1256]: calie96ab54eabd: Gained carrier Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.480 [INFO][4380] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0 coredns-668d6bf9bc- kube-system 5ec8164c-28b7-4eb6-afc2-8fbd6d62e774 1037 0 2025-11-08 00:18:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-z9rcx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie96ab54eabd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9rcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9rcx-" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.480 [INFO][4380] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9rcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.504 [INFO][4396] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" HandleID="k8s-pod-network.e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.505 [INFO][4396] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" HandleID="k8s-pod-network.e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-z9rcx", "timestamp":"2025-11-08 00:18:43.504967892 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.505 [INFO][4396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.505 [INFO][4396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.505 [INFO][4396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.512 [INFO][4396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" host="localhost" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.516 [INFO][4396] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.520 [INFO][4396] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.521 [INFO][4396] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.523 [INFO][4396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.524 [INFO][4396] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" host="localhost" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.525 [INFO][4396] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.528 [INFO][4396] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" host="localhost" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.536 [INFO][4396] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" host="localhost" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.536 [INFO][4396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" host="localhost" Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.536 [INFO][4396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:43.598392 containerd[1594]: 2025-11-08 00:18:43.536 [INFO][4396] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" HandleID="k8s-pod-network.e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:18:43.599056 containerd[1594]: 2025-11-08 00:18:43.553 [INFO][4380] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9rcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5ec8164c-28b7-4eb6-afc2-8fbd6d62e774", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-z9rcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie96ab54eabd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:43.599056 containerd[1594]: 2025-11-08 00:18:43.553 [INFO][4380] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9rcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:18:43.599056 containerd[1594]: 2025-11-08 00:18:43.553 [INFO][4380] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie96ab54eabd ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9rcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:18:43.599056 containerd[1594]: 2025-11-08 00:18:43.583 [INFO][4380] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9rcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:18:43.599056
containerd[1594]: 2025-11-08 00:18:43.583 [INFO][4380] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9rcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5ec8164c-28b7-4eb6-afc2-8fbd6d62e774", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a", Pod:"coredns-668d6bf9bc-z9rcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie96ab54eabd", MAC:"6e:ec:d1:c8:50:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:43.599056 containerd[1594]: 2025-11-08 00:18:43.595 [INFO][4380] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9rcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:18:43.617581 containerd[1594]: time="2025-11-08T00:18:43.617310567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:43.617581 containerd[1594]: time="2025-11-08T00:18:43.617378277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:43.617581 containerd[1594]: time="2025-11-08T00:18:43.617392515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:43.618195 containerd[1594]: time="2025-11-08T00:18:43.618132200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:43.649449 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:43.687218 containerd[1594]: time="2025-11-08T00:18:43.686685335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9rcx,Uid:5ec8164c-28b7-4eb6-afc2-8fbd6d62e774,Namespace:kube-system,Attempt:1,} returns sandbox id \"e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a\"" Nov 8 00:18:43.688088 kubelet[2667]: E1108 00:18:43.688038 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:43.690982 containerd[1594]: time="2025-11-08T00:18:43.690951001Z" level=info msg="CreateContainer within sandbox \"e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:18:43.720708 containerd[1594]: time="2025-11-08T00:18:43.720627519Z" level=info msg="CreateContainer within sandbox \"e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"186f874e4ff106fca3b7dea0dc146554ad358776282f3532c2d17fbeec3ff9d5\"" Nov 8 00:18:43.721274 containerd[1594]: time="2025-11-08T00:18:43.721244327Z" level=info msg="StartContainer for \"186f874e4ff106fca3b7dea0dc146554ad358776282f3532c2d17fbeec3ff9d5\"" Nov 8 00:18:43.781426 containerd[1594]: time="2025-11-08T00:18:43.781379609Z" level=info msg="StartContainer for \"186f874e4ff106fca3b7dea0dc146554ad358776282f3532c2d17fbeec3ff9d5\" returns successfully" Nov 8 00:18:44.353579 containerd[1594]: time="2025-11-08T00:18:44.353515331Z" level=info msg="StopPodSandbox for \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\"" Nov 8 00:18:44.353936 containerd[1594]: time="2025-11-08T00:18:44.353675570Z" level=info msg="StopPodSandbox for \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\"" Nov 8 00:18:44.354303 containerd[1594]: time="2025-11-08T00:18:44.354240116Z" level=info msg="StopPodSandbox for \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\"" Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.415 [INFO][4529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.416 [INFO][4529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" iface="eth0" netns="/var/run/netns/cni-115426dc-9bfd-6b50-95c2-4027bb21adef" Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.416 [INFO][4529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" iface="eth0" netns="/var/run/netns/cni-115426dc-9bfd-6b50-95c2-4027bb21adef" Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.416 [INFO][4529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" iface="eth0" netns="/var/run/netns/cni-115426dc-9bfd-6b50-95c2-4027bb21adef" Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.416 [INFO][4529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.416 [INFO][4529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.449 [INFO][4547] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" HandleID="k8s-pod-network.8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.450 [INFO][4547] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.450 [INFO][4547] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.456 [WARNING][4547] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" HandleID="k8s-pod-network.8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.457 [INFO][4547] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" HandleID="k8s-pod-network.8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.458 [INFO][4547] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:44.465136 containerd[1594]: 2025-11-08 00:18:44.461 [INFO][4529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:18:44.467188 containerd[1594]: time="2025-11-08T00:18:44.465340656Z" level=info msg="TearDown network for sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\" successfully" Nov 8 00:18:44.467188 containerd[1594]: time="2025-11-08T00:18:44.465372136Z" level=info msg="StopPodSandbox for \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\" returns successfully" Nov 8 00:18:44.467188 containerd[1594]: time="2025-11-08T00:18:44.467157013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9dm45,Uid:ba4e3da5-1f7c-4476-a748-4d008501b030,Namespace:calico-system,Attempt:1,}" Nov 8 00:18:44.470270 systemd[1]: run-netns-cni\x2d115426dc\x2d9bfd\x2d6b50\x2d95c2\x2d4027bb21adef.mount: Deactivated successfully. Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.423 [INFO][4520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.423 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" iface="eth0" netns="/var/run/netns/cni-87aa6fac-6823-3f90-e2b8-7c2cee6827fb" Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.424 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" iface="eth0" netns="/var/run/netns/cni-87aa6fac-6823-3f90-e2b8-7c2cee6827fb" Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.424 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" iface="eth0" netns="/var/run/netns/cni-87aa6fac-6823-3f90-e2b8-7c2cee6827fb" Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.424 [INFO][4520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.424 [INFO][4520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.460 [INFO][4553] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" HandleID="k8s-pod-network.c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.460 [INFO][4553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.461 [INFO][4553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.468 [WARNING][4553] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" HandleID="k8s-pod-network.c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.468 [INFO][4553] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" HandleID="k8s-pod-network.c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.470 [INFO][4553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:44.476071 containerd[1594]: 2025-11-08 00:18:44.473 [INFO][4520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:18:44.480546 systemd[1]: run-netns-cni\x2d87aa6fac\x2d6823\x2d3f90\x2de2b8\x2d7c2cee6827fb.mount: Deactivated successfully. 
Nov 8 00:18:44.481662 containerd[1594]: time="2025-11-08T00:18:44.481619812Z" level=info msg="TearDown network for sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\" successfully" Nov 8 00:18:44.481788 containerd[1594]: time="2025-11-08T00:18:44.481663656Z" level=info msg="StopPodSandbox for \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\" returns successfully" Nov 8 00:18:44.482161 kubelet[2667]: E1108 00:18:44.482127 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:44.482695 containerd[1594]: time="2025-11-08T00:18:44.482592014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xkvq8,Uid:83b4c340-630f-41a2-8c28-f5f9998eb1d0,Namespace:kube-system,Attempt:1,}" Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.428 [INFO][4524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.429 [INFO][4524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" iface="eth0" netns="/var/run/netns/cni-2da2aa35-1d5e-51d3-90a2-0ef40a9cfa94" Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.429 [INFO][4524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" iface="eth0" netns="/var/run/netns/cni-2da2aa35-1d5e-51d3-90a2-0ef40a9cfa94" Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.429 [INFO][4524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" iface="eth0" netns="/var/run/netns/cni-2da2aa35-1d5e-51d3-90a2-0ef40a9cfa94" Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.429 [INFO][4524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.429 [INFO][4524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.469 [INFO][4558] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" HandleID="k8s-pod-network.445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.469 [INFO][4558] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.470 [INFO][4558] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.475 [WARNING][4558] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" HandleID="k8s-pod-network.445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.475 [INFO][4558] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" HandleID="k8s-pod-network.445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.477 [INFO][4558] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:44.486035 containerd[1594]: 2025-11-08 00:18:44.482 [INFO][4524] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:18:44.488363 containerd[1594]: time="2025-11-08T00:18:44.488200670Z" level=info msg="TearDown network for sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\" successfully" Nov 8 00:18:44.488363 containerd[1594]: time="2025-11-08T00:18:44.488251779Z" level=info msg="StopPodSandbox for \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\" returns successfully" Nov 8 00:18:44.489025 containerd[1594]: time="2025-11-08T00:18:44.488960492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4d75b794-d277s,Uid:4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:18:44.489314 systemd[1]: run-netns-cni\x2d2da2aa35\x2d1d5e\x2d51d3\x2d90a2\x2d0ef40a9cfa94.mount: Deactivated successfully. Nov 8 00:18:44.594997 kubelet[2667]: E1108 00:18:44.594960 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:44.687067 kubelet[2667]: I1108 00:18:44.686746 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z9rcx" podStartSLOduration=37.686729733 podStartE2EDuration="37.686729733s" podCreationTimestamp="2025-11-08 00:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:18:44.68617789 +0000 UTC m=+43.410570172" watchObservedRunningTime="2025-11-08 00:18:44.686729733 +0000 UTC m=+43.411122015" Nov 8 00:18:44.718171 systemd-networkd[1256]: calie96ab54eabd: Gained IPv6LL Nov 8 00:18:44.837512 systemd-networkd[1256]: cali83ad33c7689: Link UP Nov 8 00:18:44.838444 systemd-networkd[1256]: cali83ad33c7689: Gained carrier Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.753 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--9dm45-eth0 goldmane-666569f655- calico-system ba4e3da5-1f7c-4476-a748-4d008501b030 1051 0 2025-11-08 00:18:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-9dm45 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali83ad33c7689 [] [] }} ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" 
Namespace="calico-system" Pod="goldmane-666569f655-9dm45" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9dm45-" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.753 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" Namespace="calico-system" Pod="goldmane-666569f655-9dm45" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.792 [INFO][4616] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" HandleID="k8s-pod-network.6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.793 [INFO][4616] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" HandleID="k8s-pod-network.6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-9dm45", "timestamp":"2025-11-08 00:18:44.792718894 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.793 [INFO][4616] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.793 [INFO][4616] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.793 [INFO][4616] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.800 [INFO][4616] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" host="localhost" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.807 [INFO][4616] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.813 [INFO][4616] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.815 [INFO][4616] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.817 [INFO][4616] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.817 [INFO][4616] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" host="localhost" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.818 [INFO][4616] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81 Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.823 [INFO][4616] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" host="localhost" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.830 [INFO][4616] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" host="localhost" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.830 [INFO][4616] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" host="localhost" Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.830 [INFO][4616] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:44.857972 containerd[1594]: 2025-11-08 00:18:44.830 [INFO][4616] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" HandleID="k8s-pod-network.6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:18:44.858629 containerd[1594]: 2025-11-08 00:18:44.833 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" Namespace="calico-system" Pod="goldmane-666569f655-9dm45" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9dm45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9dm45-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ba4e3da5-1f7c-4476-a748-4d008501b030", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-9dm45", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83ad33c7689", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:44.858629 containerd[1594]: 2025-11-08 00:18:44.833 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" Namespace="calico-system" Pod="goldmane-666569f655-9dm45" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:18:44.858629 containerd[1594]: 2025-11-08 00:18:44.833 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83ad33c7689 ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" Namespace="calico-system" Pod="goldmane-666569f655-9dm45" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:18:44.858629 containerd[1594]: 2025-11-08 00:18:44.838 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" Namespace="calico-system" Pod="goldmane-666569f655-9dm45" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:18:44.858629 containerd[1594]: 2025-11-08 00:18:44.840 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" Namespace="calico-system" Pod="goldmane-666569f655-9dm45" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9dm45-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9dm45-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ba4e3da5-1f7c-4476-a748-4d008501b030", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81", Pod:"goldmane-666569f655-9dm45", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83ad33c7689", MAC:"fa:e7:8d:b7:cc:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:44.858629 containerd[1594]: 2025-11-08 00:18:44.854 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81" Namespace="calico-system" Pod="goldmane-666569f655-9dm45" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:18:44.881397 containerd[1594]: time="2025-11-08T00:18:44.881288403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:44.881397 containerd[1594]: time="2025-11-08T00:18:44.881341846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:44.881397 containerd[1594]: time="2025-11-08T00:18:44.881351715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:44.881677 containerd[1594]: time="2025-11-08T00:18:44.881437391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:44.911241 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:44.940265 systemd-networkd[1256]: cali69a764a1b8b: Link UP Nov 8 00:18:44.940531 systemd-networkd[1256]: cali69a764a1b8b: Gained carrier Nov 8 00:18:44.955149 containerd[1594]: time="2025-11-08T00:18:44.955105155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9dm45,Uid:ba4e3da5-1f7c-4476-a748-4d008501b030,Namespace:calico-system,Attempt:1,} returns sandbox id \"6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81\"" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.763 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0 coredns-668d6bf9bc- kube-system 83b4c340-630f-41a2-8c28-f5f9998eb1d0 1052 0 2025-11-08 00:18:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xkvq8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69a764a1b8b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Namespace="kube-system" Pod="coredns-668d6bf9bc-xkvq8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xkvq8-" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.763 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Namespace="kube-system" Pod="coredns-668d6bf9bc-xkvq8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.802 [INFO][4623] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" HandleID="k8s-pod-network.1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.802 [INFO][4623] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" HandleID="k8s-pod-network.1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001394b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xkvq8", "timestamp":"2025-11-08 00:18:44.802393236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.802 [INFO][4623] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.831 [INFO][4623] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.831 [INFO][4623] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.901 [INFO][4623] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" host="localhost" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.911 [INFO][4623] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.915 [INFO][4623] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.917 [INFO][4623] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.919 [INFO][4623] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.919 [INFO][4623] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" host="localhost" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.920 [INFO][4623] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104 Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.924 [INFO][4623] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" host="localhost" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.930 [INFO][4623] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" host="localhost" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.930 [INFO][4623] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" host="localhost" Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.930 [INFO][4623] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:44.955609 containerd[1594]: 2025-11-08 00:18:44.930 [INFO][4623] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" HandleID="k8s-pod-network.1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:18:44.956116 containerd[1594]: 2025-11-08 00:18:44.935 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Namespace="kube-system" Pod="coredns-668d6bf9bc-xkvq8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"83b4c340-630f-41a2-8c28-f5f9998eb1d0", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xkvq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69a764a1b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:44.956116 containerd[1594]: 2025-11-08 00:18:44.936 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Namespace="kube-system" Pod="coredns-668d6bf9bc-xkvq8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:18:44.956116 containerd[1594]: 2025-11-08 00:18:44.936 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69a764a1b8b ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Namespace="kube-system" Pod="coredns-668d6bf9bc-xkvq8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:18:44.956116 containerd[1594]: 2025-11-08 00:18:44.939 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Namespace="kube-system" Pod="coredns-668d6bf9bc-xkvq8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:18:44.956116 
containerd[1594]: 2025-11-08 00:18:44.939 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Namespace="kube-system" Pod="coredns-668d6bf9bc-xkvq8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"83b4c340-630f-41a2-8c28-f5f9998eb1d0", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104", Pod:"coredns-668d6bf9bc-xkvq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69a764a1b8b", MAC:"2e:62:43:0d:a9:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:44.956116 containerd[1594]: 2025-11-08 00:18:44.951 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104" Namespace="kube-system" Pod="coredns-668d6bf9bc-xkvq8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:18:44.957243 containerd[1594]: time="2025-11-08T00:18:44.957025321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:18:44.981354 containerd[1594]: time="2025-11-08T00:18:44.981239402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:44.981354 containerd[1594]: time="2025-11-08T00:18:44.981309366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:44.981354 containerd[1594]: time="2025-11-08T00:18:44.981344714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:44.981546 containerd[1594]: time="2025-11-08T00:18:44.981485325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:45.007076 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:45.039826 systemd-networkd[1256]: cali9a8e85911e3: Link UP Nov 8 00:18:45.040049 systemd-networkd[1256]: cali9a8e85911e3: Gained carrier Nov 8 00:18:45.042439 containerd[1594]: time="2025-11-08T00:18:45.042273624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xkvq8,Uid:83b4c340-630f-41a2-8c28-f5f9998eb1d0,Namespace:kube-system,Attempt:1,} returns sandbox id \"1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104\"" Nov 8 00:18:45.043398 kubelet[2667]: E1108 00:18:45.043121 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:45.048675 containerd[1594]: time="2025-11-08T00:18:45.048632068Z" level=info msg="CreateContainer within sandbox \"1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:44.780 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0 calico-apiserver-7b4d75b794- calico-apiserver 4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d 1053 0 2025-11-08 00:18:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b4d75b794 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b4d75b794-d277s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9a8e85911e3 [] [] }} ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-d277s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--d277s-" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:44.780 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-d277s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:44.820 [INFO][4632] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" HandleID="k8s-pod-network.700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:44.821 [INFO][4632] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" HandleID="k8s-pod-network.700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139a70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b4d75b794-d277s", "timestamp":"2025-11-08 00:18:44.820983952 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:44.821 [INFO][4632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:44.930 [INFO][4632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:44.931 [INFO][4632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.002 [INFO][4632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" host="localhost" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.010 [INFO][4632] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.015 [INFO][4632] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.017 [INFO][4632] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.018 [INFO][4632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.018 [INFO][4632] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" host="localhost" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.021 [INFO][4632] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21 Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.024 [INFO][4632] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" host="localhost" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.031 [INFO][4632] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" host="localhost" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.031 [INFO][4632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" host="localhost" Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.031 [INFO][4632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:45.059223 containerd[1594]: 2025-11-08 00:18:45.031 [INFO][4632] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" HandleID="k8s-pod-network.700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:18:45.059785 containerd[1594]: 2025-11-08 00:18:45.035 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-d277s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0", GenerateName:"calico-apiserver-7b4d75b794-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4d75b794", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b4d75b794-d277s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a8e85911e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:45.059785 containerd[1594]: 2025-11-08 00:18:45.035 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-d277s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:18:45.059785 containerd[1594]: 2025-11-08 00:18:45.035 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a8e85911e3 ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-d277s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:18:45.059785 containerd[1594]: 2025-11-08 00:18:45.039 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-d277s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:18:45.059785 containerd[1594]: 2025-11-08 00:18:45.040 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-d277s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0", GenerateName:"calico-apiserver-7b4d75b794-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4d75b794", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21", Pod:"calico-apiserver-7b4d75b794-d277s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a8e85911e3", MAC:"2e:06:8b:3e:ce:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:45.059785 containerd[1594]: 2025-11-08 00:18:45.055 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-d277s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:18:45.071115 containerd[1594]: time="2025-11-08T00:18:45.071065687Z" level=info msg="CreateContainer within sandbox \"1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93e16b7b2c5e454fb810e0ff6fd7b50069d9b581781bc4f7b20457f3b07636e7\"" Nov 8 00:18:45.072325 containerd[1594]: time="2025-11-08T00:18:45.072292687Z" level=info msg="StartContainer for \"93e16b7b2c5e454fb810e0ff6fd7b50069d9b581781bc4f7b20457f3b07636e7\"" Nov 8 00:18:45.088614 containerd[1594]: time="2025-11-08T00:18:45.086324554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:45.088614 containerd[1594]: time="2025-11-08T00:18:45.086389889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:45.088614 containerd[1594]: time="2025-11-08T00:18:45.086403936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:45.088614 containerd[1594]: time="2025-11-08T00:18:45.086531021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:45.120935 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:45.139816 containerd[1594]: time="2025-11-08T00:18:45.139620840Z" level=info msg="StartContainer for \"93e16b7b2c5e454fb810e0ff6fd7b50069d9b581781bc4f7b20457f3b07636e7\" returns successfully" Nov 8 00:18:45.157722 containerd[1594]: time="2025-11-08T00:18:45.157668631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4d75b794-d277s,Uid:4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21\"" Nov 8 00:18:45.456084 containerd[1594]: time="2025-11-08T00:18:45.456033971Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:45.457275 containerd[1594]: time="2025-11-08T00:18:45.457236545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:18:45.457350 containerd[1594]: time="2025-11-08T00:18:45.457277944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:45.457528 kubelet[2667]: E1108 00:18:45.457475 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:45.457582 kubelet[2667]: E1108 00:18:45.457530 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:45.457912 kubelet[2667]: E1108 00:18:45.457833 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcvzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9dm45_calico-system(ba4e3da5-1f7c-4476-a748-4d008501b030): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:45.458186 containerd[1594]: time="2025-11-08T00:18:45.457888088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:45.459111 kubelet[2667]: E1108 00:18:45.459066 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-9dm45" podUID="ba4e3da5-1f7c-4476-a748-4d008501b030" Nov 8 00:18:45.599404 kubelet[2667]: E1108 00:18:45.599345 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9dm45" podUID="ba4e3da5-1f7c-4476-a748-4d008501b030" Nov 8 00:18:45.600566 kubelet[2667]: E1108 00:18:45.600530 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:45.600756 kubelet[2667]: E1108 00:18:45.600624 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:45.795486 containerd[1594]: time="2025-11-08T00:18:45.795424119Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:45.857520 containerd[1594]: time="2025-11-08T00:18:45.857460981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:45.857520 containerd[1594]: time="2025-11-08T00:18:45.857505947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:45.857742 kubelet[2667]: E1108 00:18:45.857663 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:45.857742 kubelet[2667]: E1108 00:18:45.857710 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:45.857948 kubelet[2667]: E1108 00:18:45.857905 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98lh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b4d75b794-d277s_calico-apiserver(4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:45.859100 kubelet[2667]: E1108 00:18:45.859063 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s" podUID="4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d" Nov 8 00:18:45.980133 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:43426.service - OpenSSH per-connection server daemon (10.0.0.1:43426). Nov 8 00:18:46.022820 sshd[4837]: Accepted publickey for core from 10.0.0.1 port 43426 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:18:46.024924 sshd[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:46.031419 systemd-logind[1568]: New session 9 of user core. Nov 8 00:18:46.036359 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:18:46.165733 sshd[4837]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:46.171596 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:43426.service: Deactivated successfully. 
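(Aside, not part of the log.) The entries above capture the whole failure cycle for the ghcr.io/flatcar/calico images: containerd gets http.StatusNotFound from ghcr.io, surfaces ErrImagePull through kubelet, and subsequent pod syncs degrade to ImagePullBackOff. As an illustrative aid, a minimal Python sketch that scans a journal dump like this one and tallies the image references that failed to resolve; the script name, file argument, and regex are assumptions keyed to the "failed to resolve reference" wording in these entries:

```python
#!/usr/bin/env python3
"""tally_pulls.py - count image references that failed to resolve.

Minimal sketch for a journal dump like the one above; the regex is an
assumption keyed to the 'failed to resolve reference "<image>"' wording,
allowing for the quote being backslash-escaped (once or several times)
inside nested kubelet/containerd messages.
"""
import re
import sys
from collections import Counter

REF_RE = re.compile(r'failed to resolve reference \\*"([^"\\]+)\\*"')

def tally(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            counts.update(REF_RE.findall(line))
    return counts

if __name__ == "__main__":
    for image, n in tally(sys.argv[1]).most_common():
        print(f"{n:4d}  {image}")
```

Run over this section it would report ghcr.io/flatcar/calico/goldmane:v3.30.4 and ghcr.io/flatcar/calico/apiserver:v3.30.4 (and, further down, kube-controllers) as the failing references.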
Nov 8 00:18:46.174313 systemd-logind[1568]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:18:46.174324 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:18:46.176400 systemd-logind[1568]: Removed session 9. Nov 8 00:18:46.381209 systemd-networkd[1256]: cali69a764a1b8b: Gained IPv6LL Nov 8 00:18:46.603817 kubelet[2667]: E1108 00:18:46.603138 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:46.603817 kubelet[2667]: E1108 00:18:46.603619 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:46.604382 kubelet[2667]: E1108 00:18:46.604205 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9dm45" podUID="ba4e3da5-1f7c-4476-a748-4d008501b030" Nov 8 00:18:46.604382 kubelet[2667]: E1108 00:18:46.604289 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s" podUID="4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d" Nov 8 00:18:46.630427 kubelet[2667]: I1108 00:18:46.630347 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xkvq8" podStartSLOduration=39.630326081 podStartE2EDuration="39.630326081s" podCreationTimestamp="2025-11-08 00:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:18:45.622960148 +0000 UTC m=+44.347352420" watchObservedRunningTime="2025-11-08 00:18:46.630326081 +0000 UTC m=+45.354718363" Nov 8 00:18:46.637161 systemd-networkd[1256]: cali9a8e85911e3: Gained IPv6LL Nov 8 00:18:46.829095 systemd-networkd[1256]: cali83ad33c7689: Gained IPv6LL Nov 8 00:18:47.355928 containerd[1594]: time="2025-11-08T00:18:47.355619814Z" level=info msg="StopPodSandbox for \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\"" Nov 8 00:18:47.356391 containerd[1594]: time="2025-11-08T00:18:47.356007800Z" level=info msg="StopPodSandbox for \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\"" Nov 8 00:18:47.356391 containerd[1594]: time="2025-11-08T00:18:47.356260876Z" level=info msg="StopPodSandbox for \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\"" Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.451 [INFO][4893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 
00:18:47.452 [INFO][4893] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" iface="eth0" netns="/var/run/netns/cni-a85afe87-91ce-743c-3352-0b042556b6d1" Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.452 [INFO][4893] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" iface="eth0" netns="/var/run/netns/cni-a85afe87-91ce-743c-3352-0b042556b6d1" Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.452 [INFO][4893] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" iface="eth0" netns="/var/run/netns/cni-a85afe87-91ce-743c-3352-0b042556b6d1" Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.452 [INFO][4893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.452 [INFO][4893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.492 [INFO][4928] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" HandleID="k8s-pod-network.a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.492 [INFO][4928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.493 [INFO][4928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.499 [WARNING][4928] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" HandleID="k8s-pod-network.a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.499 [INFO][4928] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" HandleID="k8s-pod-network.a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.501 [INFO][4928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:47.507917 containerd[1594]: 2025-11-08 00:18:47.505 [INFO][4893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.442 [INFO][4901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.442 [INFO][4901] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" iface="eth0" netns="/var/run/netns/cni-2092aa01-f835-98a0-a329-f80ff7565136" Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.443 [INFO][4901] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" iface="eth0" netns="/var/run/netns/cni-2092aa01-f835-98a0-a329-f80ff7565136" Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.443 [INFO][4901] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" iface="eth0" netns="/var/run/netns/cni-2092aa01-f835-98a0-a329-f80ff7565136" Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.443 [INFO][4901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.443 [INFO][4901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.485 [INFO][4923] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" HandleID="k8s-pod-network.455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.485 [INFO][4923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.485 [INFO][4923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.490 [WARNING][4923] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" HandleID="k8s-pod-network.455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.491 [INFO][4923] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" HandleID="k8s-pod-network.455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.492 [INFO][4923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:47.510529 containerd[1594]: 2025-11-08 00:18:47.505 [INFO][4901] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:18:47.511599 containerd[1594]: time="2025-11-08T00:18:47.511547416Z" level=info msg="TearDown network for sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\" successfully" Nov 8 00:18:47.511599 containerd[1594]: time="2025-11-08T00:18:47.511590399Z" level=info msg="StopPodSandbox for \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\" returns successfully" Nov 8 00:18:47.512202 containerd[1594]: time="2025-11-08T00:18:47.512149754Z" level=info msg="TearDown network for sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\" successfully" Nov 8 00:18:47.512202 containerd[1594]: time="2025-11-08T00:18:47.512176125Z" level=info msg="StopPodSandbox for \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\" returns successfully" Nov 8 00:18:47.514018 systemd[1]: run-netns-cni\x2da85afe87\x2d91ce\x2d743c\x2d3352\x2d0b042556b6d1.mount: Deactivated successfully. Nov 8 00:18:47.519204 containerd[1594]: time="2025-11-08T00:18:47.514102628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4d75b794-6dvvd,Uid:0357a036-98a8-435c-9d85-9cc2bb4428b4,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:18:47.519204 containerd[1594]: time="2025-11-08T00:18:47.514699445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5497d898d6-c7j84,Uid:54174585-9397-4869-81c3-ea42889b85ce,Namespace:calico-system,Attempt:1,}" Nov 8 00:18:47.518211 systemd[1]: run-netns-cni\x2d2092aa01\x2df835\x2d98a0\x2da329\x2df80ff7565136.mount: Deactivated successfully. Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.443 [INFO][4892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.443 [INFO][4892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" iface="eth0" netns="/var/run/netns/cni-4825e2b1-ba0b-97fe-6160-fcd4d80f0b1a" Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.444 [INFO][4892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" iface="eth0" netns="/var/run/netns/cni-4825e2b1-ba0b-97fe-6160-fcd4d80f0b1a" Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.445 [INFO][4892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" iface="eth0" netns="/var/run/netns/cni-4825e2b1-ba0b-97fe-6160-fcd4d80f0b1a" Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.445 [INFO][4892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.445 [INFO][4892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.498 [INFO][4917] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" HandleID="k8s-pod-network.db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.499 [INFO][4917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.501 [INFO][4917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.507 [WARNING][4917] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" HandleID="k8s-pod-network.db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.507 [INFO][4917] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" HandleID="k8s-pod-network.db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.511 [INFO][4917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:47.522631 containerd[1594]: 2025-11-08 00:18:47.519 [INFO][4892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:18:47.523210 containerd[1594]: time="2025-11-08T00:18:47.522916218Z" level=info msg="TearDown network for sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\" successfully" Nov 8 00:18:47.523210 containerd[1594]: time="2025-11-08T00:18:47.522952668Z" level=info msg="StopPodSandbox for \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\" returns successfully" Nov 8 00:18:47.523735 containerd[1594]: time="2025-11-08T00:18:47.523705144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7fgw,Uid:9c633ef5-243d-451b-9c89-0f760540ce13,Namespace:calico-system,Attempt:1,}" Nov 8 00:18:47.526079 systemd[1]: run-netns-cni\x2d4825e2b1\x2dba0b\x2d97fe\x2d6160\x2dfcd4d80f0b1a.mount: Deactivated successfully. 
Nov 8 00:18:47.608707 kubelet[2667]: E1108 00:18:47.608559 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:47.673940 systemd-networkd[1256]: cali90b09d3505d: Link UP Nov 8 00:18:47.674922 systemd-networkd[1256]: cali90b09d3505d: Gained carrier Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.588 [INFO][4960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0 calico-apiserver-7b4d75b794- calico-apiserver 0357a036-98a8-435c-9d85-9cc2bb4428b4 1129 0 2025-11-08 00:18:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b4d75b794 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b4d75b794-6dvvd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90b09d3505d [] [] }} ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-6dvvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.588 [INFO][4960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-6dvvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.630 [INFO][4994] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" HandleID="k8s-pod-network.e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.631 [INFO][4994] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" HandleID="k8s-pod-network.e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b4d75b794-6dvvd", "timestamp":"2025-11-08 00:18:47.630717792 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.631 [INFO][4994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.631 [INFO][4994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.631 [INFO][4994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.636 [INFO][4994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" host="localhost" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.640 [INFO][4994] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.645 [INFO][4994] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.651 [INFO][4994] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.654 [INFO][4994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.654 [INFO][4994] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" host="localhost" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.656 [INFO][4994] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.659 [INFO][4994] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" host="localhost" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.664 [INFO][4994] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" host="localhost" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.665 [INFO][4994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" host="localhost" Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.665 [INFO][4994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:47.691226 containerd[1594]: 2025-11-08 00:18:47.665 [INFO][4994] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" HandleID="k8s-pod-network.e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:18:47.691822 containerd[1594]: 2025-11-08 00:18:47.669 [INFO][4960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-6dvvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0", GenerateName:"calico-apiserver-7b4d75b794-", Namespace:"calico-apiserver", SelfLink:"", UID:"0357a036-98a8-435c-9d85-9cc2bb4428b4", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4d75b794", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b4d75b794-6dvvd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90b09d3505d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:47.691822 containerd[1594]: 2025-11-08 00:18:47.669 [INFO][4960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-6dvvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:18:47.691822 containerd[1594]: 2025-11-08 00:18:47.669 [INFO][4960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90b09d3505d ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-6dvvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:18:47.691822 containerd[1594]: 2025-11-08 00:18:47.675 [INFO][4960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-6dvvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:18:47.691822 containerd[1594]: 2025-11-08 00:18:47.675 [INFO][4960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-6dvvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0", GenerateName:"calico-apiserver-7b4d75b794-", Namespace:"calico-apiserver", SelfLink:"", UID:"0357a036-98a8-435c-9d85-9cc2bb4428b4", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4d75b794", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef", Pod:"calico-apiserver-7b4d75b794-6dvvd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90b09d3505d", MAC:"ba:12:ed:bb:bd:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:47.691822 containerd[1594]: 2025-11-08 00:18:47.685 [INFO][4960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef" Namespace="calico-apiserver" Pod="calico-apiserver-7b4d75b794-6dvvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:18:47.711357 containerd[1594]: time="2025-11-08T00:18:47.711244124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:47.711357 containerd[1594]: time="2025-11-08T00:18:47.711310311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:47.711357 containerd[1594]: time="2025-11-08T00:18:47.711321612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:47.711592 containerd[1594]: time="2025-11-08T00:18:47.711424520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:47.737084 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:47.765573 containerd[1594]: time="2025-11-08T00:18:47.765513639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4d75b794-6dvvd,Uid:0357a036-98a8-435c-9d85-9cc2bb4428b4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef\"" Nov 8 00:18:47.768899 containerd[1594]: time="2025-11-08T00:18:47.768838470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:47.776665 systemd-networkd[1256]: cali5924ccc3fbd: Link UP Nov 8 00:18:47.777808 systemd-networkd[1256]: cali5924ccc3fbd: Gained carrier Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.586 [INFO][4943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0 calico-kube-controllers-5497d898d6- calico-system 54174585-9397-4869-81c3-ea42889b85ce 1130 0 2025-11-08 00:18:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5497d898d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5497d898d6-c7j84 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5924ccc3fbd [] [] }} ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Namespace="calico-system" Pod="calico-kube-controllers-5497d898d6-c7j84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.586 [INFO][4943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Namespace="calico-system" Pod="calico-kube-controllers-5497d898d6-c7j84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.650 [INFO][4987] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" HandleID="k8s-pod-network.16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.651 [INFO][4987] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" HandleID="k8s-pod-network.16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001857c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5497d898d6-c7j84", "timestamp":"2025-11-08 00:18:47.650479614 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 
00:18:47.651 [INFO][4987] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.665 [INFO][4987] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.665 [INFO][4987] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.738 [INFO][4987] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" host="localhost" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.743 [INFO][4987] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.747 [INFO][4987] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.749 [INFO][4987] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.751 [INFO][4987] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.751 [INFO][4987] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" host="localhost" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.752 [INFO][4987] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.756 [INFO][4987] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" host="localhost" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.765 [INFO][4987] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" host="localhost" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.765 [INFO][4987] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" host="localhost" Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.765 [INFO][4987] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
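(Aside, not part of the log.) In the IPAM walk above, each pod address is claimed from the node's affine block 192.168.88.128/26: .134 went to calico-apiserver-7b4d75b794-6dvvd earlier and .135 to calico-kube-controllers-5497d898d6-c7j84 here. The containment and capacity arithmetic can be checked with Python's ipaddress module, using values taken from the log:

```python
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")  # node's affine block
claimed = [ipaddress.ip_address(a)                 # from the entries above
           for a in ("192.168.88.134", "192.168.88.135")]

assert all(ip in block for ip in claimed)
print(f"{block} spans {block.network_address}-{block.broadcast_address} "
      f"({block.num_addresses} addresses)")
```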
Nov 8 00:18:47.793132 containerd[1594]: 2025-11-08 00:18:47.765 [INFO][4987] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" HandleID="k8s-pod-network.16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:18:47.793729 containerd[1594]: 2025-11-08 00:18:47.773 [INFO][4943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Namespace="calico-system" Pod="calico-kube-controllers-5497d898d6-c7j84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0", GenerateName:"calico-kube-controllers-5497d898d6-", Namespace:"calico-system", SelfLink:"", UID:"54174585-9397-4869-81c3-ea42889b85ce", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5497d898d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5497d898d6-c7j84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5924ccc3fbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:47.793729 containerd[1594]: 2025-11-08 00:18:47.773 [INFO][4943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Namespace="calico-system" Pod="calico-kube-controllers-5497d898d6-c7j84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:18:47.793729 containerd[1594]: 2025-11-08 00:18:47.773 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5924ccc3fbd ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Namespace="calico-system" Pod="calico-kube-controllers-5497d898d6-c7j84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:18:47.793729 containerd[1594]: 2025-11-08 00:18:47.778 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Namespace="calico-system" Pod="calico-kube-controllers-5497d898d6-c7j84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:18:47.793729 containerd[1594]: 2025-11-08 00:18:47.779 [INFO][4943] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Namespace="calico-system" Pod="calico-kube-controllers-5497d898d6-c7j84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0", GenerateName:"calico-kube-controllers-5497d898d6-", Namespace:"calico-system", SelfLink:"", UID:"54174585-9397-4869-81c3-ea42889b85ce", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5497d898d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf", Pod:"calico-kube-controllers-5497d898d6-c7j84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5924ccc3fbd", MAC:"9e:b7:8f:5c:ec:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:47.793729 containerd[1594]: 2025-11-08 00:18:47.789 [INFO][4943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf" Namespace="calico-system" Pod="calico-kube-controllers-5497d898d6-c7j84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:18:47.815836 containerd[1594]: time="2025-11-08T00:18:47.815712021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:47.815836 containerd[1594]: time="2025-11-08T00:18:47.815789830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:47.815836 containerd[1594]: time="2025-11-08T00:18:47.815811201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:47.816049 containerd[1594]: time="2025-11-08T00:18:47.815962923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:47.851649 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:47.879272 systemd-networkd[1256]: cali87cbbe7da81: Link UP Nov 8 00:18:47.882908 systemd-networkd[1256]: cali87cbbe7da81: Gained carrier Nov 8 00:18:47.886513 containerd[1594]: time="2025-11-08T00:18:47.886196542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5497d898d6-c7j84,Uid:54174585-9397-4869-81c3-ea42889b85ce,Namespace:calico-system,Attempt:1,} returns sandbox id \"16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf\"" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.600 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s7fgw-eth0 csi-node-driver- calico-system 9c633ef5-243d-451b-9c89-0f760540ce13 1128 0 2025-11-08 00:18:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s7fgw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali87cbbe7da81 [] [] }} ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Namespace="calico-system" Pod="csi-node-driver-s7fgw" WorkloadEndpoint="localhost-k8s-csi--node--driver--s7fgw-" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.600 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Namespace="calico-system" Pod="csi-node-driver-s7fgw" WorkloadEndpoint="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.656 [INFO][4999] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" HandleID="k8s-pod-network.cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.657 [INFO][4999] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" HandleID="k8s-pod-network.cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503530), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s7fgw", "timestamp":"2025-11-08 00:18:47.656860989 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.657 [INFO][4999] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.765 [INFO][4999] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.765 [INFO][4999] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.839 [INFO][4999] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" host="localhost" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.845 [INFO][4999] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.849 [INFO][4999] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.851 [INFO][4999] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.853 [INFO][4999] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.853 [INFO][4999] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" host="localhost" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.855 [INFO][4999] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.858 [INFO][4999] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" host="localhost" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.864 [INFO][4999] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" host="localhost" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.864 [INFO][4999] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" host="localhost" Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.864 [INFO][4999] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
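(Aside, not part of the log.) With csi-node-driver-s7fgw the block has now handed out .134, .135, and .136. A companion sketch to the tally script earlier that aggregates every "Successfully claimed IPs" entry per block and reports utilization; the file argument and regex are again assumptions for illustration:

```python
#!/usr/bin/env python3
"""ipam_util.py - per-block utilization from Calico IPAM log entries."""
import re
import sys
import ipaddress
from collections import defaultdict

CLAIM_RE = re.compile(
    r"Successfully claimed IPs: \[([0-9.]+)/\d+\] block=([0-9.]+/\d+)")

def utilization(path: str) -> None:
    per_block: dict[str, set[str]] = defaultdict(set)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for ip, block in CLAIM_RE.findall(line):
                per_block[block].add(ip)
    for block, ips in sorted(per_block.items()):
        cap = ipaddress.ip_network(block, strict=False).num_addresses
        print(f"{block}: {len(ips)} claimed of {cap}")

if __name__ == "__main__":
    utilization(sys.argv[1])
```

Run over just this stretch it would report three claimed addresses in 192.168.88.128/26 out of 64; over the full journal, the claims for the earlier pods in the same block would be counted as well.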
Nov 8 00:18:47.899062 containerd[1594]: 2025-11-08 00:18:47.864 [INFO][4999] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" HandleID="k8s-pod-network.cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:18:47.899613 containerd[1594]: 2025-11-08 00:18:47.868 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Namespace="calico-system" Pod="csi-node-driver-s7fgw" WorkloadEndpoint="localhost-k8s-csi--node--driver--s7fgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s7fgw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c633ef5-243d-451b-9c89-0f760540ce13", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s7fgw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87cbbe7da81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:47.899613 containerd[1594]: 2025-11-08 00:18:47.868 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Namespace="calico-system" Pod="csi-node-driver-s7fgw" WorkloadEndpoint="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:18:47.899613 containerd[1594]: 2025-11-08 00:18:47.868 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87cbbe7da81 ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Namespace="calico-system" Pod="csi-node-driver-s7fgw" WorkloadEndpoint="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:18:47.899613 containerd[1594]: 2025-11-08 00:18:47.883 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Namespace="calico-system" Pod="csi-node-driver-s7fgw" WorkloadEndpoint="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:18:47.899613 containerd[1594]: 2025-11-08 00:18:47.884 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Namespace="calico-system" Pod="csi-node-driver-s7fgw" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--s7fgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s7fgw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c633ef5-243d-451b-9c89-0f760540ce13", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb", Pod:"csi-node-driver-s7fgw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87cbbe7da81", MAC:"8e:04:7b:2b:46:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:47.899613 containerd[1594]: 2025-11-08 00:18:47.895 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb" Namespace="calico-system" Pod="csi-node-driver-s7fgw" WorkloadEndpoint="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:18:47.917336 containerd[1594]: time="2025-11-08T00:18:47.917101802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:47.917336 containerd[1594]: time="2025-11-08T00:18:47.917167588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:47.917336 containerd[1594]: time="2025-11-08T00:18:47.917182878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:47.917549 containerd[1594]: time="2025-11-08T00:18:47.917351812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:47.943728 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:47.957433 containerd[1594]: time="2025-11-08T00:18:47.957392700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7fgw,Uid:9c633ef5-243d-451b-9c89-0f760540ce13,Namespace:calico-system,Attempt:1,} returns sandbox id \"cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb\"" Nov 8 00:18:48.118264 containerd[1594]: time="2025-11-08T00:18:48.118183370Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:48.119336 containerd[1594]: time="2025-11-08T00:18:48.119289585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:48.119477 containerd[1594]: time="2025-11-08T00:18:48.119366292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:48.119502 kubelet[2667]: E1108 00:18:48.119466 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:48.119542 kubelet[2667]: E1108 00:18:48.119504 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:48.120080 kubelet[2667]: E1108 00:18:48.119779 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vkbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b4d75b794-6dvvd_calico-apiserver(0357a036-98a8-435c-9d85-9cc2bb4428b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:48.120242 containerd[1594]: time="2025-11-08T00:18:48.119836525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:18:48.120986 kubelet[2667]: E1108 00:18:48.120950 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" podUID="0357a036-98a8-435c-9d85-9cc2bb4428b4" Nov 8 00:18:48.476690 containerd[1594]: time="2025-11-08T00:18:48.476615239Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:48.612223 kubelet[2667]: E1108 00:18:48.612175 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" podUID="0357a036-98a8-435c-9d85-9cc2bb4428b4" Nov 8 00:18:48.625601 containerd[1594]: time="2025-11-08T00:18:48.625542139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:18:48.625686 containerd[1594]: time="2025-11-08T00:18:48.625625128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:18:48.625805 kubelet[2667]: E1108 00:18:48.625758 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:18:48.625889 kubelet[2667]: E1108 00:18:48.625804 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:18:48.626109 kubelet[2667]: E1108 00:18:48.626043 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmg9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5497d898d6-c7j84_calico-system(54174585-9397-4869-81c3-ea42889b85ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:48.626280 containerd[1594]: time="2025-11-08T00:18:48.626137602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:18:48.628708 kubelet[2667]: E1108 00:18:48.628646 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5497d898d6-c7j84" podUID="54174585-9397-4869-81c3-ea42889b85ce" Nov 8 00:18:48.975351 containerd[1594]: time="2025-11-08T00:18:48.975289293Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:48.976581 containerd[1594]: time="2025-11-08T00:18:48.976548392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:18:48.976680 containerd[1594]: time="2025-11-08T00:18:48.976595903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:18:48.976858 kubelet[2667]: E1108 00:18:48.976790 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:18:48.976911 kubelet[2667]: E1108 00:18:48.976876 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:18:48.977088 kubelet[2667]: E1108 00:18:48.977042 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6hbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-s7fgw_calico-system(9c633ef5-243d-451b-9c89-0f760540ce13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:48.979304 containerd[1594]: time="2025-11-08T00:18:48.979280180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:18:49.198057 systemd-networkd[1256]: cali87cbbe7da81: Gained IPv6LL Nov 8 00:18:49.261023 systemd-networkd[1256]: cali5924ccc3fbd: Gained IPv6LL Nov 8 00:18:49.345656 containerd[1594]: time="2025-11-08T00:18:49.345591520Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:49.346903 containerd[1594]: time="2025-11-08T00:18:49.346787067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:18:49.347073 containerd[1594]: time="2025-11-08T00:18:49.346804721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:18:49.347157 kubelet[2667]: E1108 00:18:49.347109 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:18:49.347253 kubelet[2667]: E1108 00:18:49.347168 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:18:49.347364 kubelet[2667]: E1108 00:18:49.347324 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6hbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-s7fgw_calico-system(9c633ef5-243d-451b-9c89-0f760540ce13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:49.348523 kubelet[2667]: E1108 00:18:49.348477 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13" Nov 8 00:18:49.616738 kubelet[2667]: E1108 00:18:49.616372 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" podUID="0357a036-98a8-435c-9d85-9cc2bb4428b4" Nov 8 00:18:49.616738 kubelet[2667]: E1108 00:18:49.616565 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5497d898d6-c7j84" podUID="54174585-9397-4869-81c3-ea42889b85ce" Nov 8 00:18:49.617659 kubelet[2667]: E1108 00:18:49.617613 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13" Nov 8 00:18:49.710094 systemd-networkd[1256]: cali90b09d3505d: Gained IPv6LL Nov 8 00:18:51.178336 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:43438.service - OpenSSH per-connection server daemon (10.0.0.1:43438). Nov 8 00:18:51.224222 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 43438 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:18:51.226049 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:51.231288 systemd-logind[1568]: New session 10 of user core. Nov 8 00:18:51.239422 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:18:51.371871 sshd[5170]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:51.376657 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:43438.service: Deactivated successfully. 
Nov 8 00:18:51.379449 systemd-logind[1568]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:18:51.379625 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:18:51.380946 systemd-logind[1568]: Removed session 10. Nov 8 00:18:56.391210 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:37594.service - OpenSSH per-connection server daemon (10.0.0.1:37594). Nov 8 00:18:56.423488 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 37594 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:18:56.425454 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:56.430424 systemd-logind[1568]: New session 11 of user core. Nov 8 00:18:56.446199 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:18:56.563058 sshd[5189]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:56.571114 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:37598.service - OpenSSH per-connection server daemon (10.0.0.1:37598). Nov 8 00:18:56.571648 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:37594.service: Deactivated successfully. Nov 8 00:18:56.573738 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:18:56.575198 systemd-logind[1568]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:18:56.576195 systemd-logind[1568]: Removed session 11. Nov 8 00:18:56.605294 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 37598 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:18:56.606550 sshd[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:56.611222 systemd-logind[1568]: New session 12 of user core. Nov 8 00:18:56.617120 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:18:56.819731 sshd[5203]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:56.827202 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:37612.service - OpenSSH per-connection server daemon (10.0.0.1:37612). Nov 8 00:18:56.827899 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:37598.service: Deactivated successfully. Nov 8 00:18:56.830082 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:18:56.831923 systemd-logind[1568]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:18:56.833184 systemd-logind[1568]: Removed session 12. Nov 8 00:18:56.858841 sshd[5216]: Accepted publickey for core from 10.0.0.1 port 37612 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:18:56.860808 sshd[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:56.865681 systemd-logind[1568]: New session 13 of user core. Nov 8 00:18:56.875221 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:18:57.224499 sshd[5216]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:57.228288 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:37612.service: Deactivated successfully. Nov 8 00:18:57.230459 systemd-logind[1568]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:18:57.230525 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:18:57.231425 systemd-logind[1568]: Removed session 13. 
Nov 8 00:18:57.354635 containerd[1594]: time="2025-11-08T00:18:57.354548112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:18:57.680090 containerd[1594]: time="2025-11-08T00:18:57.679888307Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:57.681662 containerd[1594]: time="2025-11-08T00:18:57.681601158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:18:57.681771 containerd[1594]: time="2025-11-08T00:18:57.681659449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:18:57.681891 kubelet[2667]: E1108 00:18:57.681813 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:18:57.682335 kubelet[2667]: E1108 00:18:57.681920 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:18:57.682335 kubelet[2667]: E1108 00:18:57.682070 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:27502ee819424dd68f8b3ed29bc94e26,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6k2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f98c486d9-btd2s_calico-system(bcacda57-dec5-4042-b890-adc5f9a1885e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:57.684177 containerd[1594]: time="2025-11-08T00:18:57.684141374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:18:58.044524 containerd[1594]: time="2025-11-08T00:18:58.044459810Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:58.045757 containerd[1594]: time="2025-11-08T00:18:58.045718921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:18:58.045841 containerd[1594]: time="2025-11-08T00:18:58.045759680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:18:58.046075 kubelet[2667]: E1108 00:18:58.045988 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:18:58.046075 kubelet[2667]: E1108 00:18:58.046065 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:18:58.046272 kubelet[2667]: E1108 00:18:58.046186 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6k2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f98c486d9-btd2s_calico-system(bcacda57-dec5-4042-b890-adc5f9a1885e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:58.047386 kubelet[2667]: E1108 00:18:58.047334 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f98c486d9-btd2s" podUID="bcacda57-dec5-4042-b890-adc5f9a1885e" Nov 8 00:18:58.353864 containerd[1594]: time="2025-11-08T00:18:58.353694126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:18:58.712651 containerd[1594]: time="2025-11-08T00:18:58.712600917Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:58.713743 containerd[1594]: time="2025-11-08T00:18:58.713675155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:18:58.713743 containerd[1594]: time="2025-11-08T00:18:58.713706024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:58.714004 kubelet[2667]: E1108 00:18:58.713953 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:58.714367 kubelet[2667]: E1108 00:18:58.714024 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:58.714367 kubelet[2667]: E1108 00:18:58.714193 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcvzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9dm45_calico-system(ba4e3da5-1f7c-4476-a748-4d008501b030): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:58.715419 kubelet[2667]: E1108 00:18:58.715357 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9dm45" podUID="ba4e3da5-1f7c-4476-a748-4d008501b030" Nov 8 00:19:00.354966 containerd[1594]: time="2025-11-08T00:19:00.354900890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:19:00.708337 containerd[1594]: time="2025-11-08T00:19:00.708284818Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:00.764397 containerd[1594]: time="2025-11-08T00:19:00.764300695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:19:00.764598 containerd[1594]: time="2025-11-08T00:19:00.764341022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:19:00.764703 kubelet[2667]: E1108 00:19:00.764639 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:19:00.765219 kubelet[2667]: E1108 00:19:00.764719 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:19:00.765219 kubelet[2667]: E1108 00:19:00.765031 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6hbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-s7fgw_calico-system(9c633ef5-243d-451b-9c89-0f760540ce13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:00.765531 containerd[1594]: time="2025-11-08T00:19:00.765502195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:19:01.202922 containerd[1594]: time="2025-11-08T00:19:01.202754321Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:01.253828 containerd[1594]: time="2025-11-08T00:19:01.253763565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:19:01.254108 containerd[1594]: time="2025-11-08T00:19:01.253882573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:19:01.254441 kubelet[2667]: E1108 00:19:01.254260 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:19:01.254441 kubelet[2667]: E1108 00:19:01.254326 2667 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:19:01.254666 kubelet[2667]: E1108 00:19:01.254560 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vkbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b4d75b794-6dvvd_calico-apiserver(0357a036-98a8-435c-9d85-9cc2bb4428b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:01.254862 containerd[1594]: time="2025-11-08T00:19:01.254808495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:19:01.256182 kubelet[2667]: E1108 00:19:01.256126 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" 
podUID="0357a036-98a8-435c-9d85-9cc2bb4428b4" Nov 8 00:19:01.341790 containerd[1594]: time="2025-11-08T00:19:01.341741973Z" level=info msg="StopPodSandbox for \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\"" Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.374 [WARNING][5251] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"83b4c340-630f-41a2-8c28-f5f9998eb1d0", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104", Pod:"coredns-668d6bf9bc-xkvq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69a764a1b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.375 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.375 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" iface="eth0" netns="" Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.375 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.375 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.398 [INFO][5262] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" HandleID="k8s-pod-network.c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.398 [INFO][5262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.398 [INFO][5262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.407 [WARNING][5262] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" HandleID="k8s-pod-network.c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.407 [INFO][5262] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" HandleID="k8s-pod-network.c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.408 [INFO][5262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:01.414329 containerd[1594]: 2025-11-08 00:19:01.411 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:19:01.415251 containerd[1594]: time="2025-11-08T00:19:01.414357819Z" level=info msg="TearDown network for sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\" successfully" Nov 8 00:19:01.415251 containerd[1594]: time="2025-11-08T00:19:01.414379320Z" level=info msg="StopPodSandbox for \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\" returns successfully" Nov 8 00:19:01.415251 containerd[1594]: time="2025-11-08T00:19:01.414775137Z" level=info msg="RemovePodSandbox for \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\"" Nov 8 00:19:01.417063 containerd[1594]: time="2025-11-08T00:19:01.417027809Z" level=info msg="Forcibly stopping sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\"" Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.453 [WARNING][5280] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"83b4c340-630f-41a2-8c28-f5f9998eb1d0", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cf7270a49569d3e4c15693fcc62334bf9c0e656d71da4965051cf8fe2a0b104", Pod:"coredns-668d6bf9bc-xkvq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69a764a1b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.453 [INFO][5280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.453 [INFO][5280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" iface="eth0" netns="" Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.453 [INFO][5280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.453 [INFO][5280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.476 [INFO][5290] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" HandleID="k8s-pod-network.c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.476 [INFO][5290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.476 [INFO][5290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.482 [WARNING][5290] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" HandleID="k8s-pod-network.c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.482 [INFO][5290] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" HandleID="k8s-pod-network.c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Workload="localhost-k8s-coredns--668d6bf9bc--xkvq8-eth0" Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.484 [INFO][5290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:01.490248 containerd[1594]: 2025-11-08 00:19:01.487 [INFO][5280] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c" Nov 8 00:19:01.490724 containerd[1594]: time="2025-11-08T00:19:01.490293308Z" level=info msg="TearDown network for sandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\" successfully" Nov 8 00:19:01.501499 containerd[1594]: time="2025-11-08T00:19:01.501442276Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:19:01.501499 containerd[1594]: time="2025-11-08T00:19:01.501508463Z" level=info msg="RemovePodSandbox \"c541f7fb109335f71887249d6234c725bb6f130d5d7d9f65f6c7a61aa703a21c\" returns successfully" Nov 8 00:19:01.502226 containerd[1594]: time="2025-11-08T00:19:01.502189436Z" level=info msg="StopPodSandbox for \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\"" Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.535 [WARNING][5309] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" WorkloadEndpoint="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.536 [INFO][5309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.536 [INFO][5309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" iface="eth0" netns="" Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.536 [INFO][5309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.536 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.557 [INFO][5318] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" HandleID="k8s-pod-network.11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Workload="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.557 [INFO][5318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.557 [INFO][5318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.562 [WARNING][5318] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" HandleID="k8s-pod-network.11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Workload="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.563 [INFO][5318] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" HandleID="k8s-pod-network.11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Workload="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.564 [INFO][5318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:01.570125 containerd[1594]: 2025-11-08 00:19:01.567 [INFO][5309] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:19:01.570575 containerd[1594]: time="2025-11-08T00:19:01.570159340Z" level=info msg="TearDown network for sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\" successfully" Nov 8 00:19:01.570575 containerd[1594]: time="2025-11-08T00:19:01.570184679Z" level=info msg="StopPodSandbox for \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\" returns successfully" Nov 8 00:19:01.570745 containerd[1594]: time="2025-11-08T00:19:01.570715676Z" level=info msg="RemovePodSandbox for \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\"" Nov 8 00:19:01.570783 containerd[1594]: time="2025-11-08T00:19:01.570755732Z" level=info msg="Forcibly stopping sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\"" Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.605 [WARNING][5336] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" WorkloadEndpoint="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.605 [INFO][5336] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.605 [INFO][5336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" iface="eth0" netns="" Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.605 [INFO][5336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.605 [INFO][5336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.628 [INFO][5345] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" HandleID="k8s-pod-network.11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Workload="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.628 [INFO][5345] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.628 [INFO][5345] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.635 [WARNING][5345] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" HandleID="k8s-pod-network.11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Workload="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.635 [INFO][5345] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" HandleID="k8s-pod-network.11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Workload="localhost-k8s-whisker--8488696cbf--5xxwb-eth0" Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.636 [INFO][5345] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:01.643335 containerd[1594]: 2025-11-08 00:19:01.640 [INFO][5336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532" Nov 8 00:19:01.643736 containerd[1594]: time="2025-11-08T00:19:01.643359866Z" level=info msg="TearDown network for sandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\" successfully" Nov 8 00:19:01.650284 containerd[1594]: time="2025-11-08T00:19:01.650244983Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:19:01.650393 containerd[1594]: time="2025-11-08T00:19:01.650290130Z" level=info msg="RemovePodSandbox \"11a8f4e44ee8400eec2cd0ebbc9f46bbdf97634158447b8a58d43baea6edf532\" returns successfully" Nov 8 00:19:01.650715 containerd[1594]: time="2025-11-08T00:19:01.650669576Z" level=info msg="StopPodSandbox for \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\"" Nov 8 00:19:01.662044 containerd[1594]: time="2025-11-08T00:19:01.662005181Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:01.663118 containerd[1594]: time="2025-11-08T00:19:01.663082412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:19:01.663168 containerd[1594]: time="2025-11-08T00:19:01.663119623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:19:01.663370 kubelet[2667]: E1108 00:19:01.663326 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:19:01.663475 kubelet[2667]: E1108 00:19:01.663381 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:19:01.663658 kubelet[2667]: E1108 00:19:01.663586 2667 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98lh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b4d75b794-d277s_calico-apiserver(4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:01.664242 containerd[1594]: time="2025-11-08T00:19:01.664204329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:19:01.665390 kubelet[2667]: E1108 00:19:01.665308 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s" podUID="4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d" Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.689 [WARNING][5363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5ec8164c-28b7-4eb6-afc2-8fbd6d62e774", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a", Pod:"coredns-668d6bf9bc-z9rcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie96ab54eabd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.689 [INFO][5363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.689 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" iface="eth0" netns="" Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.689 [INFO][5363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.689 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.710 [INFO][5371] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" HandleID="k8s-pod-network.db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.710 [INFO][5371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.710 [INFO][5371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.717 [WARNING][5371] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" HandleID="k8s-pod-network.db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.717 [INFO][5371] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" HandleID="k8s-pod-network.db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.718 [INFO][5371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:01.724231 containerd[1594]: 2025-11-08 00:19:01.721 [INFO][5363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:19:01.724704 containerd[1594]: time="2025-11-08T00:19:01.724285856Z" level=info msg="TearDown network for sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\" successfully" Nov 8 00:19:01.724704 containerd[1594]: time="2025-11-08T00:19:01.724324381Z" level=info msg="StopPodSandbox for \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\" returns successfully" Nov 8 00:19:01.724944 containerd[1594]: time="2025-11-08T00:19:01.724908168Z" level=info msg="RemovePodSandbox for \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\"" Nov 8 00:19:01.724978 containerd[1594]: time="2025-11-08T00:19:01.724949346Z" level=info msg="Forcibly stopping sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\"" Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.763 [WARNING][5389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5ec8164c-28b7-4eb6-afc2-8fbd6d62e774", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4cd689a41303b5c2dff33055b0c419c0ba23f503ff017f6f1a18fbbf9827b7a", Pod:"coredns-668d6bf9bc-z9rcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie96ab54eabd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.763 [INFO][5389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.763 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" iface="eth0" netns="" Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.763 [INFO][5389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.763 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.786 [INFO][5398] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" HandleID="k8s-pod-network.db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.786 [INFO][5398] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.786 [INFO][5398] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.791 [WARNING][5398] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" HandleID="k8s-pod-network.db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.791 [INFO][5398] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" HandleID="k8s-pod-network.db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Workload="localhost-k8s-coredns--668d6bf9bc--z9rcx-eth0" Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.792 [INFO][5398] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:01.798054 containerd[1594]: 2025-11-08 00:19:01.795 [INFO][5389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998" Nov 8 00:19:01.798054 containerd[1594]: time="2025-11-08T00:19:01.797986049Z" level=info msg="TearDown network for sandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\" successfully" Nov 8 00:19:01.802222 containerd[1594]: time="2025-11-08T00:19:01.802172641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:19:01.802369 containerd[1594]: time="2025-11-08T00:19:01.802244789Z" level=info msg="RemovePodSandbox \"db50cca9692cf6fd076e6a8854454c78ca809badf093e462a6ebf4c53401f998\" returns successfully" Nov 8 00:19:01.803034 containerd[1594]: time="2025-11-08T00:19:01.802914410Z" level=info msg="StopPodSandbox for \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\"" Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.838 [WARNING][5416] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s7fgw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c633ef5-243d-451b-9c89-0f760540ce13", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb", Pod:"csi-node-driver-s7fgw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87cbbe7da81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.838 [INFO][5416] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.838 [INFO][5416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" iface="eth0" netns="" Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.839 [INFO][5416] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.839 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.861 [INFO][5427] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" HandleID="k8s-pod-network.db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.861 [INFO][5427] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.861 [INFO][5427] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.867 [WARNING][5427] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" HandleID="k8s-pod-network.db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.867 [INFO][5427] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" HandleID="k8s-pod-network.db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.868 [INFO][5427] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:01.874429 containerd[1594]: 2025-11-08 00:19:01.871 [INFO][5416] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:19:01.874911 containerd[1594]: time="2025-11-08T00:19:01.874488373Z" level=info msg="TearDown network for sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\" successfully" Nov 8 00:19:01.874911 containerd[1594]: time="2025-11-08T00:19:01.874515244Z" level=info msg="StopPodSandbox for \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\" returns successfully" Nov 8 00:19:01.875127 containerd[1594]: time="2025-11-08T00:19:01.875091457Z" level=info msg="RemovePodSandbox for \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\"" Nov 8 00:19:01.875180 containerd[1594]: time="2025-11-08T00:19:01.875134219Z" level=info msg="Forcibly stopping sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\"" Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.909 [WARNING][5446] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s7fgw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c633ef5-243d-451b-9c89-0f760540ce13", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb89b4ddd9328484e5af6235ce933ae9d2b8e12fc17640d33c1c062c9fb9f1eb", Pod:"csi-node-driver-s7fgw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87cbbe7da81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.909 [INFO][5446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.909 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" iface="eth0" netns="" Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.909 [INFO][5446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.909 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.934 [INFO][5455] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" HandleID="k8s-pod-network.db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.934 [INFO][5455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.934 [INFO][5455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.939 [WARNING][5455] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" HandleID="k8s-pod-network.db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.939 [INFO][5455] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" HandleID="k8s-pod-network.db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Workload="localhost-k8s-csi--node--driver--s7fgw-eth0" Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.941 [INFO][5455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:01.948050 containerd[1594]: 2025-11-08 00:19:01.945 [INFO][5446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795" Nov 8 00:19:01.948504 containerd[1594]: time="2025-11-08T00:19:01.948110565Z" level=info msg="TearDown network for sandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\" successfully" Nov 8 00:19:01.952910 containerd[1594]: time="2025-11-08T00:19:01.952838113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:19:01.953005 containerd[1594]: time="2025-11-08T00:19:01.952938455Z" level=info msg="RemovePodSandbox \"db3052826186b8befde9f649b0b42832733d9ad81610ff4d63c6b076ec836795\" returns successfully" Nov 8 00:19:01.953575 containerd[1594]: time="2025-11-08T00:19:01.953518575Z" level=info msg="StopPodSandbox for \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\"" Nov 8 00:19:01.988376 containerd[1594]: time="2025-11-08T00:19:01.988314610Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:01.989966 containerd[1594]: time="2025-11-08T00:19:01.989926565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:19:01.990015 containerd[1594]: time="2025-11-08T00:19:01.989954829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:19:01.990797 containerd[1594]: time="2025-11-08T00:19:01.990691358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:19:01.990833 kubelet[2667]: E1108 00:19:01.990185 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:19:01.990833 kubelet[2667]: E1108 00:19:01.990248 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:19:01.990833 kubelet[2667]: E1108 00:19:01.990515 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmg9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5497d898d6-c7j84_calico-system(54174585-9397-4869-81c3-ea42889b85ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:01.992225 kubelet[2667]: E1108 00:19:01.992164 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5497d898d6-c7j84" podUID="54174585-9397-4869-81c3-ea42889b85ce" Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:01.988 [WARNING][5473] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0", GenerateName:"calico-apiserver-7b4d75b794-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d", ResourceVersion:"1257", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4d75b794", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21", Pod:"calico-apiserver-7b4d75b794-d277s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a8e85911e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:01.989 [INFO][5473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:01.989 [INFO][5473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" iface="eth0" netns="" Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:01.989 [INFO][5473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:01.989 [INFO][5473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:02.015 [INFO][5482] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" HandleID="k8s-pod-network.445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:02.015 [INFO][5482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:02.016 [INFO][5482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:02.020 [WARNING][5482] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" HandleID="k8s-pod-network.445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:02.020 [INFO][5482] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" HandleID="k8s-pod-network.445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:02.022 [INFO][5482] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:02.027994 containerd[1594]: 2025-11-08 00:19:02.025 [INFO][5473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:19:02.028444 containerd[1594]: time="2025-11-08T00:19:02.028050159Z" level=info msg="TearDown network for sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\" successfully" Nov 8 00:19:02.028444 containerd[1594]: time="2025-11-08T00:19:02.028086238Z" level=info msg="StopPodSandbox for \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\" returns successfully" Nov 8 00:19:02.028661 containerd[1594]: time="2025-11-08T00:19:02.028638364Z" level=info msg="RemovePodSandbox for \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\"" Nov 8 00:19:02.028700 containerd[1594]: time="2025-11-08T00:19:02.028665848Z" level=info msg="Forcibly stopping sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\"" Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.065 [WARNING][5500] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0", GenerateName:"calico-apiserver-7b4d75b794-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d", ResourceVersion:"1257", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4d75b794", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"700a68b6e665abbfe689816e61b1d98a7fa6ee9729fefe1b54986d24b4ccee21", Pod:"calico-apiserver-7b4d75b794-d277s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a8e85911e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.065 [INFO][5500] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.065 [INFO][5500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" iface="eth0" netns="" Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.065 [INFO][5500] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.065 [INFO][5500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.085 [INFO][5510] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" HandleID="k8s-pod-network.445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.085 [INFO][5510] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.085 [INFO][5510] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.091 [WARNING][5510] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" HandleID="k8s-pod-network.445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.091 [INFO][5510] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" HandleID="k8s-pod-network.445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Workload="localhost-k8s-calico--apiserver--7b4d75b794--d277s-eth0" Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.092 [INFO][5510] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:02.098603 containerd[1594]: 2025-11-08 00:19:02.095 [INFO][5500] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f" Nov 8 00:19:02.098603 containerd[1594]: time="2025-11-08T00:19:02.098573376Z" level=info msg="TearDown network for sandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\" successfully" Nov 8 00:19:02.114601 containerd[1594]: time="2025-11-08T00:19:02.114542904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:19:02.114601 containerd[1594]: time="2025-11-08T00:19:02.114603740Z" level=info msg="RemovePodSandbox \"445367a868b48d9642a754fe49a3f644f6d17c8578ed0570b81bffac070f891f\" returns successfully" Nov 8 00:19:02.115219 containerd[1594]: time="2025-11-08T00:19:02.115177528Z" level=info msg="StopPodSandbox for \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\"" Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.154 [WARNING][5528] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9dm45-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ba4e3da5-1f7c-4476-a748-4d008501b030", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81", Pod:"goldmane-666569f655-9dm45", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83ad33c7689", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.154 [INFO][5528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.154 [INFO][5528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" iface="eth0" netns="" Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.154 [INFO][5528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.154 [INFO][5528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.174 [INFO][5537] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" HandleID="k8s-pod-network.8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.174 [INFO][5537] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.174 [INFO][5537] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.179 [WARNING][5537] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" HandleID="k8s-pod-network.8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.179 [INFO][5537] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" HandleID="k8s-pod-network.8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.180 [INFO][5537] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:02.186272 containerd[1594]: 2025-11-08 00:19:02.183 [INFO][5528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:19:02.186708 containerd[1594]: time="2025-11-08T00:19:02.186324258Z" level=info msg="TearDown network for sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\" successfully" Nov 8 00:19:02.186708 containerd[1594]: time="2025-11-08T00:19:02.186355087Z" level=info msg="StopPodSandbox for \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\" returns successfully" Nov 8 00:19:02.186876 containerd[1594]: time="2025-11-08T00:19:02.186812853Z" level=info msg="RemovePodSandbox for \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\"" Nov 8 00:19:02.186931 containerd[1594]: time="2025-11-08T00:19:02.186842609Z" level=info msg="Forcibly stopping sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\"" Nov 8 00:19:02.232089 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:37614.service - OpenSSH per-connection server daemon (10.0.0.1:37614). Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.220 [WARNING][5554] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9dm45-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ba4e3da5-1f7c-4476-a748-4d008501b030", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c63cb651bf6076e5d7a7889fe02f0e84a474e7493b7f88dca3b4c5c072c1d81", Pod:"goldmane-666569f655-9dm45", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83ad33c7689", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.220 [INFO][5554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.220 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" iface="eth0" netns="" Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.221 [INFO][5554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.221 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.248 [INFO][5563] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" HandleID="k8s-pod-network.8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.248 [INFO][5563] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.248 [INFO][5563] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.256 [WARNING][5563] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" HandleID="k8s-pod-network.8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.256 [INFO][5563] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" HandleID="k8s-pod-network.8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Workload="localhost-k8s-goldmane--666569f655--9dm45-eth0" Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.257 [INFO][5563] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:02.263227 containerd[1594]: 2025-11-08 00:19:02.260 [INFO][5554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60" Nov 8 00:19:02.263648 containerd[1594]: time="2025-11-08T00:19:02.263277769Z" level=info msg="TearDown network for sandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\" successfully" Nov 8 00:19:02.267667 containerd[1594]: time="2025-11-08T00:19:02.267629767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:19:02.267746 containerd[1594]: time="2025-11-08T00:19:02.267700141Z" level=info msg="RemovePodSandbox \"8ea8f8030ddf727fc7ba021a612ccf9b66cb3ff5d3b5978c7579ef5601082d60\" returns successfully" Nov 8 00:19:02.268497 containerd[1594]: time="2025-11-08T00:19:02.268271655Z" level=info msg="StopPodSandbox for \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\"" Nov 8 00:19:02.280868 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 37614 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:02.282780 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:02.288263 systemd-logind[1568]: New session 14 of user core. Nov 8 00:19:02.293238 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 8 00:19:02.330478 containerd[1594]: time="2025-11-08T00:19:02.330256264Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:02.331530 containerd[1594]: time="2025-11-08T00:19:02.331497458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:19:02.331676 containerd[1594]: time="2025-11-08T00:19:02.331537976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:19:02.331910 kubelet[2667]: E1108 00:19:02.331839 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:19:02.331970 kubelet[2667]: E1108 00:19:02.331926 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:19:02.332104 kubelet[2667]: E1108 00:19:02.332055 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6hbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-s7fgw_calico-system(9c633ef5-243d-451b-9c89-0f760540ce13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:02.333296 kubelet[2667]: E1108 00:19:02.333239 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13" Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.306 [WARNING][5584] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0", GenerateName:"calico-kube-controllers-5497d898d6-", Namespace:"calico-system", SelfLink:"", UID:"54174585-9397-4869-81c3-ea42889b85ce", ResourceVersion:"1252", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5497d898d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf", Pod:"calico-kube-controllers-5497d898d6-c7j84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5924ccc3fbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.306 [INFO][5584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.306 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" iface="eth0" netns="" Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.306 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.306 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.329 [INFO][5595] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" HandleID="k8s-pod-network.a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.329 [INFO][5595] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.329 [INFO][5595] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.335 [WARNING][5595] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" HandleID="k8s-pod-network.a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.335 [INFO][5595] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" HandleID="k8s-pod-network.a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.336 [INFO][5595] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:02.342691 containerd[1594]: 2025-11-08 00:19:02.340 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:19:02.343262 containerd[1594]: time="2025-11-08T00:19:02.342763456Z" level=info msg="TearDown network for sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\" successfully" Nov 8 00:19:02.343262 containerd[1594]: time="2025-11-08T00:19:02.342805386Z" level=info msg="StopPodSandbox for \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\" returns successfully" Nov 8 00:19:02.344040 containerd[1594]: time="2025-11-08T00:19:02.344008288Z" level=info msg="RemovePodSandbox for \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\"" Nov 8 00:19:02.344040 containerd[1594]: time="2025-11-08T00:19:02.344040419Z" level=info msg="Forcibly stopping sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\"" Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.381 [WARNING][5615] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0", GenerateName:"calico-kube-controllers-5497d898d6-", Namespace:"calico-system", SelfLink:"", UID:"54174585-9397-4869-81c3-ea42889b85ce", ResourceVersion:"1252", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5497d898d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16cd270d0eb5a22bf83845f9cdfd6bcfd5c784dba4210f3c12d1e222dc2d70cf", Pod:"calico-kube-controllers-5497d898d6-c7j84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5924ccc3fbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.381 [INFO][5615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.381 [INFO][5615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" iface="eth0" netns="" Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.381 [INFO][5615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.381 [INFO][5615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.402 [INFO][5628] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" HandleID="k8s-pod-network.a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.402 [INFO][5628] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.402 [INFO][5628] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.407 [WARNING][5628] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" HandleID="k8s-pod-network.a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.407 [INFO][5628] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" HandleID="k8s-pod-network.a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Workload="localhost-k8s-calico--kube--controllers--5497d898d6--c7j84-eth0" Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.409 [INFO][5628] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:02.413771 containerd[1594]: 2025-11-08 00:19:02.411 [INFO][5615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce" Nov 8 00:19:02.413771 containerd[1594]: time="2025-11-08T00:19:02.413721986Z" level=info msg="TearDown network for sandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\" successfully" Nov 8 00:19:02.422621 containerd[1594]: time="2025-11-08T00:19:02.422474564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:19:02.422621 containerd[1594]: time="2025-11-08T00:19:02.422526283Z" level=info msg="RemovePodSandbox \"a142d0098ff30af33a748e2b2799917ccc86e93a51772d9a562637cffc7f77ce\" returns successfully" Nov 8 00:19:02.424052 containerd[1594]: time="2025-11-08T00:19:02.423746538Z" level=info msg="StopPodSandbox for \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\"" Nov 8 00:19:02.435425 sshd[5568]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:02.438515 systemd-logind[1568]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:19:02.438909 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:37614.service: Deactivated successfully. Nov 8 00:19:02.443123 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:19:02.444723 systemd-logind[1568]: Removed session 14. Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.460 [WARNING][5645] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0", GenerateName:"calico-apiserver-7b4d75b794-", Namespace:"calico-apiserver", SelfLink:"", UID:"0357a036-98a8-435c-9d85-9cc2bb4428b4", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4d75b794", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef", Pod:"calico-apiserver-7b4d75b794-6dvvd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90b09d3505d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.460 [INFO][5645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.460 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" iface="eth0" netns="" Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.460 [INFO][5645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.460 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.481 [INFO][5656] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" HandleID="k8s-pod-network.455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.481 [INFO][5656] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.482 [INFO][5656] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.487 [WARNING][5656] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" HandleID="k8s-pod-network.455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.487 [INFO][5656] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" HandleID="k8s-pod-network.455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.488 [INFO][5656] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:02.494295 containerd[1594]: 2025-11-08 00:19:02.491 [INFO][5645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:19:02.494752 containerd[1594]: time="2025-11-08T00:19:02.494334068Z" level=info msg="TearDown network for sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\" successfully" Nov 8 00:19:02.494752 containerd[1594]: time="2025-11-08T00:19:02.494359176Z" level=info msg="StopPodSandbox for \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\" returns successfully" Nov 8 00:19:02.494957 containerd[1594]: time="2025-11-08T00:19:02.494921382Z" level=info msg="RemovePodSandbox for \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\"" Nov 8 00:19:02.495002 containerd[1594]: time="2025-11-08T00:19:02.494966538Z" level=info msg="Forcibly stopping sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\"" Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.526 [WARNING][5674] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0", GenerateName:"calico-apiserver-7b4d75b794-", Namespace:"calico-apiserver", SelfLink:"", UID:"0357a036-98a8-435c-9d85-9cc2bb4428b4", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 18, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4d75b794", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9474c1b8a52b017aba19926b4e6bf2f1f57e3241df28e8e6368318dfabb64ef", Pod:"calico-apiserver-7b4d75b794-6dvvd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90b09d3505d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.527 [INFO][5674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.527 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" iface="eth0" netns="" Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.527 [INFO][5674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.527 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.548 [INFO][5682] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" HandleID="k8s-pod-network.455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.548 [INFO][5682] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.548 [INFO][5682] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.553 [WARNING][5682] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" HandleID="k8s-pod-network.455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.553 [INFO][5682] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" HandleID="k8s-pod-network.455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Workload="localhost-k8s-calico--apiserver--7b4d75b794--6dvvd-eth0" Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.555 [INFO][5682] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:19:02.561163 containerd[1594]: 2025-11-08 00:19:02.558 [INFO][5674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88" Nov 8 00:19:02.561699 containerd[1594]: time="2025-11-08T00:19:02.561212109Z" level=info msg="TearDown network for sandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\" successfully" Nov 8 00:19:02.565216 containerd[1594]: time="2025-11-08T00:19:02.565190322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:19:02.565279 containerd[1594]: time="2025-11-08T00:19:02.565240739Z" level=info msg="RemovePodSandbox \"455800ad34a31ea3140028f6470db82f0a0f1922123ef8a0382188ef6237ad88\" returns successfully" Nov 8 00:19:07.445108 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:39356.service - OpenSSH per-connection server daemon (10.0.0.1:39356). Nov 8 00:19:07.475670 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 39356 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:07.477606 sshd[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:07.481961 systemd-logind[1568]: New session 15 of user core. Nov 8 00:19:07.492251 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:19:07.602890 sshd[5690]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:07.606827 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:39356.service: Deactivated successfully. Nov 8 00:19:07.609470 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:19:07.610325 systemd-logind[1568]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:19:07.611706 systemd-logind[1568]: Removed session 15. 
Nov 8 00:19:08.365909 kubelet[2667]: E1108 00:19:08.365835 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f98c486d9-btd2s" podUID="bcacda57-dec5-4042-b890-adc5f9a1885e" Nov 8 00:19:09.656326 kubelet[2667]: E1108 00:19:09.656268 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:19:10.353101 kubelet[2667]: E1108 00:19:10.353048 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:19:12.354207 kubelet[2667]: E1108 00:19:12.354142 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9dm45" podUID="ba4e3da5-1f7c-4476-a748-4d008501b030" Nov 8 00:19:12.621389 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:39358.service - OpenSSH per-connection server daemon (10.0.0.1:39358). Nov 8 00:19:12.660262 sshd[5730]: Accepted publickey for core from 10.0.0.1 port 39358 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:12.662743 sshd[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:12.668626 systemd-logind[1568]: New session 16 of user core. Nov 8 00:19:12.679380 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:19:12.901809 sshd[5730]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:12.907665 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:39358.service: Deactivated successfully. Nov 8 00:19:12.909109 systemd-logind[1568]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:19:12.911668 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:19:12.912775 systemd-logind[1568]: Removed session 16. 
Nov 8 00:19:13.353745 kubelet[2667]: E1108 00:19:13.353688 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" podUID="0357a036-98a8-435c-9d85-9cc2bb4428b4" Nov 8 00:19:13.354807 kubelet[2667]: E1108 00:19:13.354670 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s" podUID="4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d" Nov 8 00:19:14.356060 kubelet[2667]: E1108 00:19:14.355960 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13" Nov 8 00:19:17.354504 kubelet[2667]: E1108 00:19:17.354448 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5497d898d6-c7j84" podUID="54174585-9397-4869-81c3-ea42889b85ce" Nov 8 00:19:17.919106 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:47842.service - OpenSSH per-connection server daemon (10.0.0.1:47842). Nov 8 00:19:17.956302 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 47842 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:17.957979 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:17.961989 systemd-logind[1568]: New session 17 of user core. 
Nov 8 00:19:17.970120 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:19:18.089988 sshd[5745]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:18.109481 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:47856.service - OpenSSH per-connection server daemon (10.0.0.1:47856). Nov 8 00:19:18.110242 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:47842.service: Deactivated successfully. Nov 8 00:19:18.121710 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:19:18.129009 systemd-logind[1568]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:19:18.135199 systemd-logind[1568]: Removed session 17. Nov 8 00:19:18.169110 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 47856 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:18.170893 sshd[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:18.176998 systemd-logind[1568]: New session 18 of user core. Nov 8 00:19:18.182199 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:19:18.353863 kubelet[2667]: E1108 00:19:18.353757 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:19:18.530402 sshd[5757]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:18.538227 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:47858.service - OpenSSH per-connection server daemon (10.0.0.1:47858). Nov 8 00:19:18.542412 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:47856.service: Deactivated successfully. Nov 8 00:19:18.550058 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:19:18.553991 systemd-logind[1568]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:19:18.556536 systemd-logind[1568]: Removed session 18. Nov 8 00:19:18.574075 sshd[5771]: Accepted publickey for core from 10.0.0.1 port 47858 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:18.574709 sshd[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:18.580163 systemd-logind[1568]: New session 19 of user core. Nov 8 00:19:18.589310 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:19:19.086913 sshd[5771]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:19.096468 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:47870.service - OpenSSH per-connection server daemon (10.0.0.1:47870). Nov 8 00:19:19.097343 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:47858.service: Deactivated successfully. Nov 8 00:19:19.104407 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:19:19.105705 systemd-logind[1568]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:19:19.112197 systemd-logind[1568]: Removed session 19. Nov 8 00:19:19.140899 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 47870 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:19.142675 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:19.149282 systemd-logind[1568]: New session 20 of user core. Nov 8 00:19:19.157198 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:19:19.400709 sshd[5790]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:19.416137 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:47884.service - OpenSSH per-connection server daemon (10.0.0.1:47884). 
Nov 8 00:19:19.416686 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:47870.service: Deactivated successfully. Nov 8 00:19:19.418818 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:19:19.419675 systemd-logind[1568]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:19:19.421099 systemd-logind[1568]: Removed session 20. Nov 8 00:19:19.451224 sshd[5805]: Accepted publickey for core from 10.0.0.1 port 47884 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:19.452978 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:19.457704 systemd-logind[1568]: New session 21 of user core. Nov 8 00:19:19.465131 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:19:19.574456 sshd[5805]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:19.581334 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:47884.service: Deactivated successfully. Nov 8 00:19:19.584207 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:19:19.585216 systemd-logind[1568]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:19:19.586312 systemd-logind[1568]: Removed session 21. Nov 8 00:19:22.358516 containerd[1594]: time="2025-11-08T00:19:22.357822779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:19:22.758941 containerd[1594]: time="2025-11-08T00:19:22.758834399Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:22.797481 containerd[1594]: time="2025-11-08T00:19:22.797394925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:19:22.797703 containerd[1594]: time="2025-11-08T00:19:22.797553015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:19:22.797810 kubelet[2667]: E1108 00:19:22.797758 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:19:22.798366 kubelet[2667]: E1108 00:19:22.797827 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:19:22.798366 kubelet[2667]: E1108 00:19:22.798003 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:27502ee819424dd68f8b3ed29bc94e26,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6k2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f98c486d9-btd2s_calico-system(bcacda57-dec5-4042-b890-adc5f9a1885e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:22.800076 containerd[1594]: time="2025-11-08T00:19:22.800045430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:19:23.227120 containerd[1594]: time="2025-11-08T00:19:23.227057579Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:23.313872 containerd[1594]: time="2025-11-08T00:19:23.313788810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:19:23.314055 containerd[1594]: time="2025-11-08T00:19:23.313821021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:19:23.314165 kubelet[2667]: E1108 00:19:23.314107 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:19:23.314241 kubelet[2667]: E1108 00:19:23.314180 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:19:23.314365 kubelet[2667]: E1108 00:19:23.314308 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6k2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f98c486d9-btd2s_calico-system(bcacda57-dec5-4042-b890-adc5f9a1885e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:23.315546 kubelet[2667]: E1108 00:19:23.315489 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f98c486d9-btd2s" podUID="bcacda57-dec5-4042-b890-adc5f9a1885e" Nov 8 00:19:23.363766 containerd[1594]: time="2025-11-08T00:19:23.363694733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:19:23.703592 containerd[1594]: time="2025-11-08T00:19:23.703401438Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 8 00:19:23.705609 containerd[1594]: time="2025-11-08T00:19:23.705558320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:19:23.705729 containerd[1594]: time="2025-11-08T00:19:23.705624695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:19:23.705772 kubelet[2667]: E1108 00:19:23.705685 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:19:23.705772 kubelet[2667]: E1108 00:19:23.705724 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:19:23.705945 kubelet[2667]: E1108 00:19:23.705873 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcvzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9dm45_calico-system(ba4e3da5-1f7c-4476-a748-4d008501b030): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:23.707518 kubelet[2667]: E1108 00:19:23.707479 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9dm45" podUID="ba4e3da5-1f7c-4476-a748-4d008501b030" Nov 8 00:19:24.582306 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:46600.service - OpenSSH per-connection server daemon (10.0.0.1:46600). Nov 8 00:19:24.619203 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 46600 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:24.624153 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:24.640920 systemd-logind[1568]: New session 22 of user core. Nov 8 00:19:24.643156 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:19:24.786903 sshd[5830]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:24.791294 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:46600.service: Deactivated successfully. Nov 8 00:19:24.794521 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:19:24.795633 systemd-logind[1568]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:19:24.796581 systemd-logind[1568]: Removed session 22. 
Nov 8 00:19:26.355180 containerd[1594]: time="2025-11-08T00:19:26.355105421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:19:26.742794 containerd[1594]: time="2025-11-08T00:19:26.742743650Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:26.839816 containerd[1594]: time="2025-11-08T00:19:26.839737878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:19:26.839992 containerd[1594]: time="2025-11-08T00:19:26.839792071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:19:26.840128 kubelet[2667]: E1108 00:19:26.840071 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:19:26.840544 kubelet[2667]: E1108 00:19:26.840134 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:19:26.840544 kubelet[2667]: E1108 00:19:26.840270 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vkbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b4d75b794-6dvvd_calico-apiserver(0357a036-98a8-435c-9d85-9cc2bb4428b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:26.841555 kubelet[2667]: E1108 00:19:26.841489 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" podUID="0357a036-98a8-435c-9d85-9cc2bb4428b4" Nov 8 00:19:27.355052 containerd[1594]: time="2025-11-08T00:19:27.354618834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:19:27.873667 containerd[1594]: time="2025-11-08T00:19:27.873592913Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:27.875072 containerd[1594]: time="2025-11-08T00:19:27.875010678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:19:27.875168 containerd[1594]: time="2025-11-08T00:19:27.875051446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:19:27.875343 kubelet[2667]: E1108 00:19:27.875272 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:19:27.875343 kubelet[2667]: E1108 00:19:27.875351 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:19:27.875882 kubelet[2667]: E1108 
00:19:27.875476 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98lh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b4d75b794-d277s_calico-apiserver(4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:27.876952 kubelet[2667]: E1108 00:19:27.876913 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s" podUID="4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d" Nov 8 00:19:28.354746 containerd[1594]: time="2025-11-08T00:19:28.354700761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:19:28.736063 containerd[1594]: time="2025-11-08T00:19:28.736017919Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:28.737290 containerd[1594]: time="2025-11-08T00:19:28.737249412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:19:28.737373 containerd[1594]: time="2025-11-08T00:19:28.737310497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:19:28.737459 kubelet[2667]: E1108 00:19:28.737411 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:19:28.737568 kubelet[2667]: E1108 00:19:28.737474 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:19:28.737674 kubelet[2667]: E1108 00:19:28.737630 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6hbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-s7fgw_calico-system(9c633ef5-243d-451b-9c89-0f760540ce13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:28.739950 containerd[1594]: time="2025-11-08T00:19:28.739692660Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:19:29.091992 containerd[1594]: time="2025-11-08T00:19:29.089001233Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:19:29.092493 containerd[1594]: time="2025-11-08T00:19:29.092375476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:19:29.092493 containerd[1594]: time="2025-11-08T00:19:29.092459966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:19:29.092795 kubelet[2667]: E1108 00:19:29.092711 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:19:29.093122 kubelet[2667]: E1108 00:19:29.092806 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:19:29.093122 kubelet[2667]: E1108 00:19:29.092978 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6hbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-s7fgw_calico-system(9c633ef5-243d-451b-9c89-0f760540ce13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:29.095006 kubelet[2667]: E1108 00:19:29.094965 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13" Nov 8 00:19:29.353834 kubelet[2667]: E1108 00:19:29.353364 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:19:29.355342 containerd[1594]: time="2025-11-08T00:19:29.355047143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:19:29.689878 containerd[1594]: time="2025-11-08T00:19:29.689642040Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 
00:19:29.717120 containerd[1594]: time="2025-11-08T00:19:29.717046141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:19:29.717283 containerd[1594]: time="2025-11-08T00:19:29.717154236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:19:29.719130 kubelet[2667]: E1108 00:19:29.717471 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:19:29.719130 kubelet[2667]: E1108 00:19:29.717542 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:19:29.719130 kubelet[2667]: E1108 00:19:29.717675 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmg9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5497d898d6-c7j84_calico-system(54174585-9397-4869-81c3-ea42889b85ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:19:29.719130 kubelet[2667]: E1108 00:19:29.718977 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5497d898d6-c7j84" podUID="54174585-9397-4869-81c3-ea42889b85ce" Nov 8 00:19:29.798090 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:46614.service - OpenSSH per-connection server daemon (10.0.0.1:46614). Nov 8 00:19:29.834012 sshd[5849]: Accepted publickey for core from 10.0.0.1 port 46614 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:29.835898 sshd[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:29.841968 systemd-logind[1568]: New session 23 of user core. Nov 8 00:19:29.850212 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:19:29.992446 sshd[5849]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:29.997541 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:46614.service: Deactivated successfully. Nov 8 00:19:30.001263 systemd-logind[1568]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:19:30.001915 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:19:30.003620 systemd-logind[1568]: Removed session 23. Nov 8 00:19:30.353124 kubelet[2667]: E1108 00:19:30.352963 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:19:35.002086 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:35690.service - OpenSSH per-connection server daemon (10.0.0.1:35690). Nov 8 00:19:35.036469 sshd[5864]: Accepted publickey for core from 10.0.0.1 port 35690 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:35.038503 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:35.042984 systemd-logind[1568]: New session 24 of user core. 
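Interleaved with the pull errors, kubelet twice logs "Nameserver limits exceeded". The glibc resolver consults at most three nameservers, and kubelet applies the same cap when it builds resolv.conf content for pods, so entries past the third are dropped; the applied line here keeps 1.1.1.1, 1.0.0.1, and 8.8.8.8. A short sketch of that check, assuming the conventional /etc/resolv.conf path and the classic limit of three:

    # resolv_check.py -- flag resolv.conf files that exceed the glibc
    # limit of three nameservers, the condition behind kubelet's
    # "Nameserver limits exceeded" warning above. Path and limit are
    # the conventional defaults; systemd-resolved setups may differ.
    MAX_NAMESERVERS = 3  # glibc MAXNS; kubelet enforces the same cap

    def check_resolv_conf(path: str = "/etc/resolv.conf") -> None:
        with open(path) as fh:
            servers = [
                line.split()[1]
                for line in fh
                if line.strip().startswith("nameserver")
                and len(line.split()) > 1
            ]
        if len(servers) > MAX_NAMESERVERS:
            kept = servers[:MAX_NAMESERVERS]
            dropped = servers[MAX_NAMESERVERS:]
            print(f"limit exceeded: keeping {kept}, omitting {dropped}")
        else:
            print(f"ok: {servers}")

    if __name__ == "__main__":
        check_resolv_conf()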
Nov 8 00:19:35.054123 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:19:35.181068 sshd[5864]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:35.187056 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:35690.service: Deactivated successfully. Nov 8 00:19:35.190527 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:19:35.191634 systemd-logind[1568]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:19:35.193651 systemd-logind[1568]: Removed session 24. Nov 8 00:19:36.354388 kubelet[2667]: E1108 00:19:36.354303 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9dm45" podUID="ba4e3da5-1f7c-4476-a748-4d008501b030" Nov 8 00:19:37.362346 kubelet[2667]: E1108 00:19:37.361686 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f98c486d9-btd2s" podUID="bcacda57-dec5-4042-b890-adc5f9a1885e" Nov 8 00:19:38.354353 kubelet[2667]: E1108 00:19:38.354286 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-6dvvd" podUID="0357a036-98a8-435c-9d85-9cc2bb4428b4" Nov 8 00:19:40.191281 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:35702.service - OpenSSH per-connection server daemon (10.0.0.1:35702). Nov 8 00:19:40.229994 sshd[5905]: Accepted publickey for core from 10.0.0.1 port 35702 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:19:40.232021 sshd[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:40.236820 systemd-logind[1568]: New session 25 of user core. Nov 8 00:19:40.244436 systemd[1]: Started session-25.scope - Session 25 of User core. 
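By 00:19:36 the errors have shifted from ErrImagePull to ImagePullBackOff: kubelet is no longer re-pulling on every pod sync but waiting out a back-off that, with stock kubelet defaults, starts at 10 seconds, doubles per failure, and caps at five minutes. A sketch of that schedule, assuming those defaults rather than anything read from this node's configuration:

    # backoff_schedule.py -- sketch of kubelet's image pull back-off,
    # which produces the ImagePullBackOff entries above: delays start
    # at 10s, double per failure, and cap at 5 minutes (assumed stock
    # kubelet defaults).
    def backoff_delays(initial: float = 10.0, cap: float = 300.0,
                       retries: int = 8):
        delay = initial
        for _ in range(retries):
            yield min(delay, cap)
            delay *= 2

    if __name__ == "__main__":
        # 10, 20, 40, 80, 160, 300, 300, 300 seconds
        print([int(d) for d in backoff_delays()])

The cap is why, once a pod has failed several pulls, its "Back-off pulling image" entries settle into a slow, steady cadence instead of tight retry loops.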
Nov 8 00:19:40.354665 kubelet[2667]: E1108 00:19:40.354596 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s7fgw" podUID="9c633ef5-243d-451b-9c89-0f760540ce13" Nov 8 00:19:40.372393 sshd[5905]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:40.379525 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:35702.service: Deactivated successfully. Nov 8 00:19:40.384099 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:19:40.384270 systemd-logind[1568]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:19:40.386429 systemd-logind[1568]: Removed session 25. Nov 8 00:19:41.354809 kubelet[2667]: E1108 00:19:41.354717 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b4d75b794-d277s" podUID="4ef4f6a0-e5de-4cc3-969d-3e49cdd9607d"
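Every affected pod here (goldmane, whisker, both calico-apiserver replicas, csi-node-driver, calico-kube-controllers) fails on the same v3.30.4 tag, which points at a bad tag pin in the Calico install rather than node-level trouble. To tally the failing images out of a saved journal like this one, a small sketch; the node.log file name is a placeholder:

    # failed_images.py -- count the images behind the pull failures in a
    # saved journal, using the image="..." field kubelet attaches to its
    # "Failed to pull image" entries (one count per logged attempt).
    import re
    from collections import Counter

    IMAGE_RE = re.compile(r'image="([^"]+)"')

    def failing_images(log_path: str) -> Counter:
        counts: Counter = Counter()
        with open(log_path) as fh:
            for line in fh:
                if "Failed to pull image" in line:
                    counts.update(IMAGE_RE.findall(line))
        return counts

    if __name__ == "__main__":
        for image, n in failing_images("node.log").most_common():
            print(f"{n:4d}  {image}")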