Nov 1 00:34:33.925017 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:34:33.925038 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:34:33.925048 kernel: BIOS-provided physical RAM map:
Nov 1 00:34:33.925055 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 00:34:33.925061 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 1 00:34:33.925067 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 1 00:34:33.925074 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 1 00:34:33.925080 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 1 00:34:33.925086 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Nov 1 00:34:33.925092 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Nov 1 00:34:33.925101 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Nov 1 00:34:33.925107 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Nov 1 00:34:33.925113 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Nov 1 00:34:33.925119 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Nov 1 00:34:33.925126 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Nov 1 00:34:33.925133 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 1 00:34:33.925142 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Nov 1 00:34:33.925148 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Nov 1 00:34:33.925155 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 1 00:34:33.925161 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 00:34:33.925168 kernel: NX (Execute Disable) protection: active
Nov 1 00:34:33.925174 kernel: APIC: Static calls initialized
Nov 1 00:34:33.925181 kernel: efi: EFI v2.7 by EDK II
Nov 1 00:34:33.925187 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Nov 1 00:34:33.925194 kernel: SMBIOS 2.8 present.
Nov 1 00:34:33.925201 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Nov 1 00:34:33.925207 kernel: Hypervisor detected: KVM
Nov 1 00:34:33.925216 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:34:33.925223 kernel: kvm-clock: using sched offset of 4416623923 cycles
Nov 1 00:34:33.925230 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:34:33.925237 kernel: tsc: Detected 2794.750 MHz processor
Nov 1 00:34:33.925244 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:34:33.925251 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:34:33.925258 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Nov 1 00:34:33.925265 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 1 00:34:33.925272 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:34:33.925280 kernel: Using GB pages for direct mapping
Nov 1 00:34:33.925287 kernel: Secure boot disabled
Nov 1 00:34:33.925294 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:34:33.925301 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 1 00:34:33.925313 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 1 00:34:33.925320 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:34:33.925327 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:34:33.925336 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 1 00:34:33.925343 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:34:33.925351 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:34:33.925358 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:34:33.925365 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:34:33.925372 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 1 00:34:33.925379 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 1 00:34:33.925388 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 1 00:34:33.925395 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 1 00:34:33.925402 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 1 00:34:33.925409 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 1 00:34:33.925416 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 1 00:34:33.925423 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 1 00:34:33.925430 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 1 00:34:33.925437 kernel: No NUMA configuration found
Nov 1 00:34:33.925444 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Nov 1 00:34:33.925453 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Nov 1 00:34:33.925460 kernel: Zone ranges:
Nov 1 00:34:33.925467 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:34:33.925474 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Nov 1 00:34:33.925481 kernel: Normal empty
Nov 1 00:34:33.925488 kernel: Movable zone start for each node
Nov 1 00:34:33.925495 kernel: Early memory node ranges
Nov 1 00:34:33.925502 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 00:34:33.925509 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 1 00:34:33.925516 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 1 00:34:33.925526 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Nov 1 00:34:33.925533 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Nov 1 00:34:33.925539 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Nov 1 00:34:33.925546 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Nov 1 00:34:33.925553 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:34:33.925560 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 00:34:33.925567 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 1 00:34:33.925574 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:34:33.925581 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Nov 1 00:34:33.925591 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 1 00:34:33.925609 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Nov 1 00:34:33.925616 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:34:33.925623 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:34:33.925630 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:34:33.925637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:34:33.925644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:34:33.925651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:34:33.925658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:34:33.925668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:34:33.925675 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:34:33.925682 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:34:33.925689 kernel: TSC deadline timer available
Nov 1 00:34:33.925696 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 1 00:34:33.925703 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:34:33.925710 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 00:34:33.925724 kernel: kvm-guest: setup PV sched yield
Nov 1 00:34:33.925731 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 1 00:34:33.925738 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:34:33.925747 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:34:33.925754 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 1 00:34:33.925761 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Nov 1 00:34:33.925768 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Nov 1 00:34:33.925775 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 00:34:33.925782 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:34:33.925789 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:34:33.925797 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:34:33.925807 kernel: random: crng init done
Nov 1 00:34:33.925814 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:34:33.925821 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:34:33.925828 kernel: Fallback order for Node 0: 0
Nov 1 00:34:33.925835 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Nov 1 00:34:33.925842 kernel: Policy zone: DMA32
Nov 1 00:34:33.925849 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:34:33.925856 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 166140K reserved, 0K cma-reserved)
Nov 1 00:34:33.925864 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 00:34:33.925873 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:34:33.925880 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:34:33.925887 kernel: Dynamic Preempt: voluntary
Nov 1 00:34:33.925894 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:34:33.925909 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:34:33.925919 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 00:34:33.925926 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:34:33.925933 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:34:33.925941 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:34:33.925948 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:34:33.925955 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 00:34:33.925965 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 00:34:33.925972 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:34:33.925980 kernel: Console: colour dummy device 80x25
Nov 1 00:34:33.925987 kernel: printk: console [ttyS0] enabled
Nov 1 00:34:33.926004 kernel: ACPI: Core revision 20230628
Nov 1 00:34:33.926012 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:34:33.926023 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:34:33.926030 kernel: x2apic enabled
Nov 1 00:34:33.926037 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:34:33.926045 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 00:34:33.926052 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 00:34:33.926059 kernel: kvm-guest: setup PV IPIs
Nov 1 00:34:33.926067 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:34:33.926074 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 1 00:34:33.926081 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 1 00:34:33.926091 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:34:33.926098 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:34:33.926106 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:34:33.926113 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:34:33.926120 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:34:33.926128 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:34:33.926136 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 00:34:33.926143 kernel: active return thunk: retbleed_return_thunk
Nov 1 00:34:33.926150 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 00:34:33.926160 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:34:33.926167 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:34:33.926175 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 00:34:33.926183 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 00:34:33.926190 kernel: active return thunk: srso_return_thunk
Nov 1 00:34:33.926198 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 00:34:33.926205 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:34:33.926212 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:34:33.926222 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:34:33.926229 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:34:33.926237 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 00:34:33.926244 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:34:33.926251 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:34:33.926259 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:34:33.926266 kernel: landlock: Up and running.
Nov 1 00:34:33.926273 kernel: SELinux: Initializing.
Nov 1 00:34:33.926281 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:34:33.926290 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:34:33.926298 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 00:34:33.926305 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:34:33.926315 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:34:33.926323 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:34:33.926331 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:34:33.926340 kernel: ... version: 0
Nov 1 00:34:33.926347 kernel: ... bit width: 48
Nov 1 00:34:33.926354 kernel: ... generic registers: 6
Nov 1 00:34:33.926364 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:34:33.926371 kernel: ... max period: 00007fffffffffff
Nov 1 00:34:33.926378 kernel: ... fixed-purpose events: 0
Nov 1 00:34:33.926385 kernel: ... event mask: 000000000000003f
Nov 1 00:34:33.926392 kernel: signal: max sigframe size: 1776
Nov 1 00:34:33.926400 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:34:33.926407 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:34:33.926414 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:34:33.926422 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:34:33.926431 kernel: .... node #0, CPUs: #1 #2 #3
Nov 1 00:34:33.926438 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 00:34:33.926446 kernel: smpboot: Max logical packages: 1
Nov 1 00:34:33.926453 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 1 00:34:33.926460 kernel: devtmpfs: initialized
Nov 1 00:34:33.926467 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:34:33.926475 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 1 00:34:33.926482 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 1 00:34:33.926490 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Nov 1 00:34:33.926499 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 1 00:34:33.926507 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 1 00:34:33.926514 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:34:33.926521 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 00:34:33.926529 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:34:33.926536 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:34:33.926543 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:34:33.926551 kernel: audit: type=2000 audit(1761957272.639:1): state=initialized audit_enabled=0 res=1
Nov 1 00:34:33.926558 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:34:33.926567 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:34:33.926575 kernel: cpuidle: using governor menu
Nov 1 00:34:33.926582 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:34:33.926589 kernel: dca service started, version 1.12.1
Nov 1 00:34:33.926607 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 00:34:33.926615 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 1 00:34:33.926622 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:34:33.926629 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:34:33.926637 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:34:33.926647 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:34:33.926654 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:34:33.926661 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:34:33.926668 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:34:33.926676 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:34:33.926683 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:34:33.926690 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:34:33.926698 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:34:33.926705 kernel: ACPI: Interpreter enabled
Nov 1 00:34:33.926722 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:34:33.926729 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:34:33.926737 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:34:33.926744 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:34:33.926751 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:34:33.926759 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:34:33.926948 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:34:33.927182 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:34:33.927309 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:34:33.927319 kernel: PCI host bridge to bus 0000:00
Nov 1 00:34:33.927448 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:34:33.927561 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:34:33.927688 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:34:33.927808 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 1 00:34:33.927918 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 00:34:33.928033 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Nov 1 00:34:33.928141 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:34:33.928275 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 00:34:33.928479 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 1 00:34:33.928668 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Nov 1 00:34:33.928841 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Nov 1 00:34:33.928973 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 1 00:34:33.929093 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Nov 1 00:34:33.929214 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:34:33.929343 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:34:33.929464 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Nov 1 00:34:33.929585 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Nov 1 00:34:33.929732 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Nov 1 00:34:33.929871 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:34:33.930032 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Nov 1 00:34:33.930185 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Nov 1 00:34:33.930307 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Nov 1 00:34:33.930435 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:34:33.930556 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Nov 1 00:34:33.930820 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Nov 1 00:34:33.930976 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Nov 1 00:34:33.931097 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Nov 1 00:34:33.931223 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 00:34:33.931341 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:34:33.931467 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 00:34:33.931585 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Nov 1 00:34:33.931733 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Nov 1 00:34:33.931862 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 00:34:33.931981 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Nov 1 00:34:33.931991 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:34:33.931998 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:34:33.932006 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:34:33.932014 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:34:33.932021 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:34:33.932032 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:34:33.932039 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:34:33.932047 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:34:33.932054 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:34:33.932061 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:34:33.932069 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:34:33.932076 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:34:33.932083 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:34:33.932091 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:34:33.932100 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:34:33.932108 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:34:33.932115 kernel: iommu: Default domain type: Translated
Nov 1 00:34:33.932122 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:34:33.932130 kernel: efivars: Registered efivars operations
Nov 1 00:34:33.932137 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:34:33.932144 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:34:33.932152 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 1 00:34:33.932159 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Nov 1 00:34:33.932168 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Nov 1 00:34:33.932176 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Nov 1 00:34:33.932299 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:34:33.932417 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:34:33.932540 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:34:33.932550 kernel: vgaarb: loaded
Nov 1 00:34:33.932558 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:34:33.932565 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:34:33.932573 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:34:33.932584 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:34:33.932591 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:34:33.932646 kernel: pnp: PnP ACPI init
Nov 1 00:34:33.932789 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 00:34:33.932801 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 00:34:33.932809 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:34:33.932816 kernel: NET: Registered PF_INET protocol family
Nov 1 00:34:33.932824 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:34:33.932835 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:34:33.932842 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:34:33.932850 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:34:33.932857 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 00:34:33.932865 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:34:33.932872 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:34:33.932880 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:34:33.932887 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:34:33.932895 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:34:33.933017 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Nov 1 00:34:33.933137 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Nov 1 00:34:33.933246 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:34:33.933353 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:34:33.933460 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:34:33.933568 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 1 00:34:33.933691 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 00:34:33.933809 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Nov 1 00:34:33.933822 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:34:33.933830 kernel: Initialise system trusted keyrings
Nov 1 00:34:33.933838 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:34:33.933845 kernel: Key type asymmetric registered
Nov 1 00:34:33.933853 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:34:33.933860 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:34:33.933867 kernel: io scheduler mq-deadline registered
Nov 1 00:34:33.933875 kernel: io scheduler kyber registered
Nov 1 00:34:33.933882 kernel: io scheduler bfq registered
Nov 1 00:34:33.933892 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:34:33.933900 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 00:34:33.933907 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 00:34:33.933915 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 1 00:34:33.933922 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:34:33.933930 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:34:33.933937 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:34:33.933945 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:34:33.933952 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:34:33.934080 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 1 00:34:33.934090 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:34:33.934200 kernel: rtc_cmos 00:04: registered as rtc0
Nov 1 00:34:33.934311 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:34:33 UTC (1761957273)
Nov 1 00:34:33.934422 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 1 00:34:33.934432 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 1 00:34:33.934439 kernel: efifb: probing for efifb
Nov 1 00:34:33.934450 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Nov 1 00:34:33.934458 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Nov 1 00:34:33.934465 kernel: efifb: scrolling: redraw
Nov 1 00:34:33.934472 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Nov 1 00:34:33.934480 kernel: Console: switching to colour frame buffer device 100x37
Nov 1 00:34:33.934487 kernel: fb0: EFI VGA frame buffer device
Nov 1 00:34:33.934512 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:34:33.934522 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 1 00:34:33.934529 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:34:33.934539 kernel: Segment Routing with IPv6
Nov 1 00:34:33.934546 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:34:33.934554 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:34:33.934562 kernel: Key type dns_resolver registered
Nov 1 00:34:33.934569 kernel: IPI shorthand broadcast: enabled
Nov 1 00:34:33.934577 kernel: sched_clock: Marking stable (747003021, 201621621)->(1000356649, -51732007)
Nov 1 00:34:33.934584 kernel: registered taskstats version 1
Nov 1 00:34:33.934604 kernel: Loading compiled-in X.509 certificates
Nov 1 00:34:33.934620 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:34:33.934629 kernel: Key type .fscrypt registered
Nov 1 00:34:33.934639 kernel: Key type fscrypt-provisioning registered
Nov 1 00:34:33.934647 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:34:33.934654 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:34:33.934662 kernel: ima: No architecture policies found
Nov 1 00:34:33.934670 kernel: clk: Disabling unused clocks
Nov 1 00:34:33.934678 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:34:33.934686 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:34:33.934693 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:34:33.934701 kernel: Run /init as init process
Nov 1 00:34:33.934711 kernel: with arguments:
Nov 1 00:34:33.934726 kernel: /init
Nov 1 00:34:33.934733 kernel: with environment:
Nov 1 00:34:33.934741 kernel: HOME=/
Nov 1 00:34:33.934748 kernel: TERM=linux
Nov 1 00:34:33.934758 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:34:33.934768 systemd[1]: Detected virtualization kvm.
Nov 1 00:34:33.934778 systemd[1]: Detected architecture x86-64.
Nov 1 00:34:33.934787 systemd[1]: Running in initrd.
Nov 1 00:34:33.934797 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:34:33.934805 systemd[1]: Hostname set to .
Nov 1 00:34:33.934813 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:34:33.934824 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:34:33.934832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:34:33.934842 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:34:33.934853 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:34:33.934862 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:34:33.934870 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:34:33.934879 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:34:33.934891 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:34:33.934899 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:34:33.934907 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:34:33.934916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:34:33.934924 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:34:33.934932 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:34:33.934940 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:34:33.934948 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:34:33.934958 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:34:33.934966 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:34:33.934976 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:34:33.934984 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:34:33.934992 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:34:33.935000 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:34:33.935008 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:34:33.935016 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:34:33.935024 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:34:33.935035 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:34:33.935043 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:34:33.935051 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:34:33.935059 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:34:33.935068 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:34:33.935076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:34:33.935084 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:34:33.935092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:34:33.935120 systemd-journald[193]: Collecting audit messages is disabled.
Nov 1 00:34:33.935138 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:34:33.935150 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:34:33.935158 systemd-journald[193]: Journal started
Nov 1 00:34:33.935176 systemd-journald[193]: Runtime Journal (/run/log/journal/9d9044a8e19e4540ae5552868f213118) is 6.0M, max 48.3M, 42.2M free.
Nov 1 00:34:33.928856 systemd-modules-load[194]: Inserted module 'overlay'
Nov 1 00:34:33.941057 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:34:33.942646 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:34:33.947521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:34:33.959621 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:34:33.961775 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:34:33.971200 kernel: Bridge firewalling registered
Nov 1 00:34:33.961834 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 1 00:34:33.963038 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:34:33.965741 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:34:33.968323 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:34:33.971189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:34:33.987141 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:34:33.991468 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:34:33.995663 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:34:33.999840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:34:34.013763 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:34:34.018103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:34:34.027494 dracut-cmdline[228]: dracut-dracut-053
Nov 1 00:34:34.030452 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:34:34.059204 systemd-resolved[231]: Positive Trust Anchors:
Nov 1 00:34:34.059218 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:34:34.059249 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:34:34.074896 systemd-resolved[231]: Defaulting to hostname 'linux'.
Nov 1 00:34:34.077255 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:34:34.078197 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:34:34.119649 kernel: SCSI subsystem initialized
Nov 1 00:34:34.128621 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:34:34.139624 kernel: iscsi: registered transport (tcp)
Nov 1 00:34:34.160829 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:34:34.160907 kernel: QLogic iSCSI HBA Driver
Nov 1 00:34:34.209696 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:34:34.215851 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:34:34.242133 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:34:34.242166 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:34:34.243787 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:34:34.284623 kernel: raid6: avx2x4 gen() 30525 MB/s
Nov 1 00:34:34.301617 kernel: raid6: avx2x2 gen() 31614 MB/s
Nov 1 00:34:34.319347 kernel: raid6: avx2x1 gen() 26132 MB/s
Nov 1 00:34:34.319366 kernel: raid6: using algorithm avx2x2 gen() 31614 MB/s
Nov 1 00:34:34.337353 kernel: raid6: .... xor() 19905 MB/s, rmw enabled
Nov 1 00:34:34.337383 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:34:34.357620 kernel: xor: automatically using best checksumming function avx
Nov 1 00:34:34.512618 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:34:34.525981 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:34:34.544767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:34:34.560448 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Nov 1 00:34:34.565164 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:34:34.580755 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:34:34.598020 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Nov 1 00:34:34.634476 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:34:34.647841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:34:34.718132 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:34:34.732730 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:34:34.745380 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:34:34.750540 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:34:34.754971 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:34:34.757172 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:34:34.769195 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:34:34.772434 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:34:34.784450 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:34:34.790642 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 1 00:34:34.793667 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:34:34.814713 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 1 00:34:34.814890 kernel: libata version 3.00 loaded.
Nov 1 00:34:34.814908 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:34:34.814918 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:34:34.814928 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:34:34.814938 kernel: GPT:9289727 != 19775487
Nov 1 00:34:34.814948 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:34:34.814957 kernel: GPT:9289727 != 19775487
Nov 1 00:34:34.814967 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:34:34.814977 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:34:34.793848 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:34:34.797889 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:34:34.802465 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:34:34.822645 kernel: ahci 0000:00:1f.2: version 3.0
Nov 1 00:34:34.822831 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 1 00:34:34.802746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:34:34.828810 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 1 00:34:34.828975 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 1 00:34:34.810884 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:34:34.830393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:34:34.840042 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (460)
Nov 1 00:34:34.840129 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Nov 1 00:34:34.852257 kernel: scsi host0: ahci
Nov 1 00:34:34.853014 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 1 00:34:34.866399 kernel: scsi host1: ahci
Nov 1 00:34:34.866568 kernel: scsi host2: ahci
Nov 1 00:34:34.866738 kernel: scsi host3: ahci
Nov 1 00:34:34.866881 kernel: scsi host4: ahci
Nov 1 00:34:34.867028 kernel: scsi host5: ahci
Nov 1 00:34:34.867166 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Nov 1 00:34:34.867177 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Nov 1 00:34:34.867191 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Nov 1 00:34:34.867201 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Nov 1 00:34:34.867211 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Nov 1 00:34:34.869429 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Nov 1 00:34:34.868926 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 1 00:34:34.878078 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 1 00:34:34.880304 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 1 00:34:34.890193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 00:34:34.908713 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:34:34.910471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:34:34.920967 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:34:34.910527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:34:34.922842 disk-uuid[558]: Primary Header is updated.
Nov 1 00:34:34.922842 disk-uuid[558]: Secondary Entries is updated.
Nov 1 00:34:34.922842 disk-uuid[558]: Secondary Header is updated.
Nov 1 00:34:34.927696 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:34:34.914505 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:34:34.917072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:34:34.935157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:34:34.945753 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:34:34.969174 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:34:35.176627 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 1 00:34:35.176705 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 1 00:34:35.184621 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 00:34:35.184667 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 00:34:35.187627 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 00:34:35.187643 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 1 00:34:35.189077 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 1 00:34:35.189090 kernel: ata3.00: applying bridge limits
Nov 1 00:34:35.190727 kernel: ata3.00: configured for UDMA/100
Nov 1 00:34:35.191626 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 1 00:34:35.237566 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 1 00:34:35.237856 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 1 00:34:35.250630 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 1 00:34:35.929610 disk-uuid[559]: The operation has completed successfully.
Nov 1 00:34:35.931774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:34:35.954941 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:34:35.955065 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:34:35.985742 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:34:35.989449 sh[598]: Success
Nov 1 00:34:36.001661 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 1 00:34:36.035374 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:34:36.054288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:34:36.059268 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:34:36.069241 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:34:36.069267 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:34:36.069278 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:34:36.072373 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:34:36.072387 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:34:36.077929 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:34:36.079246 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:34:36.084778 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:34:36.087836 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:34:36.097670 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:34:36.097700 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:34:36.097711 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:34:36.101619 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:34:36.109874 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:34:36.112555 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:34:36.121181 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:34:36.128795 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:34:36.178840 ignition[688]: Ignition 2.19.0
Nov 1 00:34:36.178854 ignition[688]: Stage: fetch-offline
Nov 1 00:34:36.178889 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:34:36.178899 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:34:36.179000 ignition[688]: parsed url from cmdline: ""
Nov 1 00:34:36.179004 ignition[688]: no config URL provided
Nov 1 00:34:36.179010 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:34:36.179019 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:34:36.179046 ignition[688]: op(1): [started] loading QEMU firmware config module
Nov 1 00:34:36.179052 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 1 00:34:36.185767 ignition[688]: op(1): [finished] loading QEMU firmware config module
Nov 1 00:34:36.220480 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:34:36.232745 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:34:36.254650 systemd-networkd[787]: lo: Link UP
Nov 1 00:34:36.254659 systemd-networkd[787]: lo: Gained carrier
Nov 1 00:34:36.256142 systemd-networkd[787]: Enumeration completed
Nov 1 00:34:36.256239 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:34:36.256513 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:34:36.256517 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:34:36.257219 systemd-networkd[787]: eth0: Link UP
Nov 1 00:34:36.257223 systemd-networkd[787]: eth0: Gained carrier
Nov 1 00:34:36.257230 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:34:36.257872 systemd[1]: Reached target network.target - Network.
Nov 1 00:34:36.272646 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 00:34:36.297825 ignition[688]: parsing config with SHA512: 590b736266dbc539c7323bad0802a91ee6975781d435e0ec73ce3969089bba5bfb981d1794978bd43397a6d2768dfa7eb33fde65e4f131bf94676733135228c8
Nov 1 00:34:36.301902 unknown[688]: fetched base config from "system"
Nov 1 00:34:36.301915 unknown[688]: fetched user config from "qemu"
Nov 1 00:34:36.302434 ignition[688]: fetch-offline: fetch-offline passed
Nov 1 00:34:36.302552 ignition[688]: Ignition finished successfully
Nov 1 00:34:36.306622 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:34:36.307652 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 00:34:36.318736 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:34:36.332406 ignition[791]: Ignition 2.19.0
Nov 1 00:34:36.332417 ignition[791]: Stage: kargs
Nov 1 00:34:36.332570 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:34:36.332582 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:34:36.333391 ignition[791]: kargs: kargs passed
Nov 1 00:34:36.333432 ignition[791]: Ignition finished successfully
Nov 1 00:34:36.342470 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:34:36.356825 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:34:36.364382 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.5
Nov 1 00:34:36.364397 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Nov 1 00:34:36.371230 ignition[799]: Ignition 2.19.0
Nov 1 00:34:36.371241 ignition[799]: Stage: disks
Nov 1 00:34:36.371398 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:34:36.371409 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:34:36.372165 ignition[799]: disks: disks passed
Nov 1 00:34:36.372207 ignition[799]: Ignition finished successfully
Nov 1 00:34:36.380260 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:34:36.381096 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:34:36.383949 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:34:36.387306 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:34:36.391025 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:34:36.394178 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:34:36.410730 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:34:36.426790 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 1 00:34:36.432876 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:34:36.447682 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:34:36.534637 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:34:36.535340 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:34:36.536688 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:34:36.553679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:34:36.555093 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:34:36.557442 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 00:34:36.557478 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:34:36.557498 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:34:36.563414 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:34:36.565064 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:34:36.582620 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Nov 1 00:34:36.585007 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:34:36.585029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:34:36.587429 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:34:36.590623 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:34:36.592091 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:34:36.606589 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:34:36.612579 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:34:36.617764 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:34:36.622879 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:34:36.706904 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:34:36.716690 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:34:36.719859 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:34:36.729684 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:34:36.742632 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 00:34:36.751859 ignition[932]: INFO : Ignition 2.19.0
Nov 1 00:34:36.751859 ignition[932]: INFO : Stage: mount
Nov 1 00:34:36.754412 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:34:36.754412 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:34:36.754412 ignition[932]: INFO : mount: mount passed
Nov 1 00:34:36.754412 ignition[932]: INFO : Ignition finished successfully
Nov 1 00:34:36.755361 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 00:34:36.762749 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 00:34:37.067307 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:34:37.085772 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:34:37.092629 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Nov 1 00:34:37.095938 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:34:37.095956 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:34:37.095966 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:34:37.100621 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:34:37.101530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:34:37.122903 ignition[962]: INFO : Ignition 2.19.0
Nov 1 00:34:37.122903 ignition[962]: INFO : Stage: files
Nov 1 00:34:37.125441 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:34:37.125441 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:34:37.125441 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:34:37.125441 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:34:37.125441 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:34:37.135352 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:34:37.135352 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:34:37.135352 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:34:37.135352 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:34:37.135352 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:34:37.127563 unknown[962]: wrote ssh authorized keys file for user: core
Nov 1 00:34:37.179880 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:34:37.253073 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:34:37.256251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:34:37.329738 systemd-networkd[787]: eth0: Gained IPv6LL Nov 1 00:34:37.668700 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:34:37.986093 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:34:37.986093 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:34:37.991968 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:34:37.991968 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:34:37.991968 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:34:37.991968 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 1 00:34:37.991968 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:34:37.991968 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:34:37.991968 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 1 00:34:37.991968 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 00:34:38.020921 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:34:38.026165 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:34:38.028898 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 00:34:38.028898 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:34:38.028898 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:34:38.028898 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:34:38.028898 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:34:38.028898 ignition[962]: INFO : files: files passed Nov 1 00:34:38.028898 ignition[962]: INFO : Ignition finished successfully Nov 1 00:34:38.045591 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:34:38.059720 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:34:38.063418 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:34:38.064304 systemd[1]: ignition-quench.service: Deactivated successfully. 
Nov 1 00:34:38.064422 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:34:38.080974 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Nov 1 00:34:38.085869 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:34:38.085869 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:34:38.090896 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:34:38.094533 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:34:38.095314 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:34:38.110709 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:34:38.135371 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:34:38.135488 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:34:38.139029 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:34:38.139998 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:34:38.144459 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:34:38.157714 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:34:38.173111 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:34:38.182850 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:34:38.191713 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:34:38.192449 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:34:38.196137 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:34:38.200175 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:34:38.200288 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:34:38.205455 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:34:38.206326 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:34:38.211117 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:34:38.213536 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:34:38.217134 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:34:38.220365 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:34:38.224044 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:34:38.227131 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:34:38.230663 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:34:38.233992 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:34:38.237002 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:34:38.237114 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:34:38.241780 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:34:38.242689 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 1 00:34:38.248468 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:34:38.251900 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:34:38.256188 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:34:38.256356 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:34:38.261178 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:34:38.261334 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:34:38.264907 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:34:38.266017 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:34:38.270689 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:34:38.271551 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:34:38.276073 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:34:38.278510 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:34:38.278624 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:34:38.281324 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:34:38.281406 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:34:38.284104 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:34:38.284212 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:34:38.287082 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:34:38.287181 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:34:38.304808 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:34:38.308131 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:34:38.308277 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:34:38.313069 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:34:38.315636 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:34:38.315865 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:34:38.325962 ignition[1016]: INFO : Ignition 2.19.0 Nov 1 00:34:38.325962 ignition[1016]: INFO : Stage: umount Nov 1 00:34:38.325962 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:34:38.325962 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:34:38.325962 ignition[1016]: INFO : umount: umount passed Nov 1 00:34:38.325962 ignition[1016]: INFO : Ignition finished successfully Nov 1 00:34:38.320164 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:34:38.320775 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:34:38.329200 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:34:38.329384 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:34:38.334335 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:34:38.334442 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:34:38.337133 systemd[1]: Stopped target network.target - Network. Nov 1 00:34:38.339334 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:34:38.339395 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Nov 1 00:34:38.342795 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:34:38.342879 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:34:38.346179 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:34:38.346238 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:34:38.349947 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:34:38.350029 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:34:38.353720 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:34:38.357004 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:34:38.360661 systemd-networkd[787]: eth0: DHCPv6 lease lost Nov 1 00:34:38.361938 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:34:38.362955 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:34:38.363180 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:34:38.366453 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:34:38.366691 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:34:38.371750 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:34:38.371795 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:34:38.384693 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:34:38.387283 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:34:38.387337 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:34:38.389378 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:34:38.389426 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:34:38.392232 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:34:38.392280 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:34:38.395377 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:34:38.395424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:34:38.399025 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:34:38.417206 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:34:38.417321 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:34:38.421084 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:34:38.421323 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:34:38.424635 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:34:38.424739 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:34:38.427225 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:34:38.427277 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:34:38.430562 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:34:38.430689 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:34:38.434689 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:34:38.434742 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:34:38.438373 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 1 00:34:38.438437 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:34:38.458782 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:34:38.461405 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:34:38.461478 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:34:38.465010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:34:38.465071 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:34:38.469255 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:34:38.469363 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:34:38.537397 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:34:38.537526 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:34:38.540606 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:34:38.541338 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:34:38.541386 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:34:38.559707 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:34:38.569154 systemd[1]: Switching root. Nov 1 00:34:38.601549 systemd-journald[193]: Journal stopped Nov 1 00:34:39.838215 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Nov 1 00:34:39.838278 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:34:39.838291 kernel: SELinux: policy capability open_perms=1 Nov 1 00:34:39.838306 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:34:39.838317 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:34:39.838328 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:34:39.838343 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:34:39.838354 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:34:39.838365 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:34:39.838382 kernel: audit: type=1403 audit(1761957279.056:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:34:39.838394 systemd[1]: Successfully loaded SELinux policy in 41.121ms. Nov 1 00:34:39.838419 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.396ms. Nov 1 00:34:39.838439 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:34:39.838451 systemd[1]: Detected virtualization kvm. Nov 1 00:34:39.838462 systemd[1]: Detected architecture x86-64. Nov 1 00:34:39.838474 systemd[1]: Detected first boot. Nov 1 00:34:39.838492 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:34:39.838504 zram_generator::config[1061]: No configuration found. Nov 1 00:34:39.838518 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:34:39.838538 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:34:39.838553 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 1 00:34:39.838564 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Nov 1 00:34:39.838577 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:34:39.838589 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:34:39.838611 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:34:39.838623 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:34:39.838635 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:34:39.838647 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:34:39.838659 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:34:39.838674 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:34:39.838685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:34:39.838697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:34:39.838709 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:34:39.838721 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:34:39.838732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:34:39.838745 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:34:39.838756 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:34:39.838772 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:34:39.838784 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 1 00:34:39.838796 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 1 00:34:39.838808 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 1 00:34:39.838819 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:34:39.838831 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:34:39.838843 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:34:39.838855 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:34:39.838869 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:34:39.838880 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:34:39.838893 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:34:39.838904 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:34:39.838916 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:34:39.838927 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:34:39.838939 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:34:39.838950 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:34:39.838962 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:34:39.838976 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:34:39.838988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 1 00:34:39.838999 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:34:39.839011 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:34:39.839025 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:34:39.839037 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:34:39.839049 systemd[1]: Reached target machines.target - Containers. Nov 1 00:34:39.839060 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:34:39.839072 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:34:39.839086 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:34:39.839098 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:34:39.839109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:34:39.839121 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:34:39.839133 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:34:39.839144 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:34:39.839156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:34:39.839167 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:34:39.839181 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:34:39.839193 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 00:34:39.839205 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:34:39.839222 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:34:39.839233 kernel: fuse: init (API version 7.39) Nov 1 00:34:39.839244 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:34:39.839256 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:34:39.839268 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:34:39.839280 kernel: loop: module loaded Nov 1 00:34:39.839294 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:34:39.839306 kernel: ACPI: bus type drm_connector registered Nov 1 00:34:39.839318 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:34:39.839346 systemd-journald[1138]: Collecting audit messages is disabled. Nov 1 00:34:39.839370 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:34:39.839381 systemd-journald[1138]: Journal started Nov 1 00:34:39.839405 systemd-journald[1138]: Runtime Journal (/run/log/journal/9d9044a8e19e4540ae5552868f213118) is 6.0M, max 48.3M, 42.2M free. Nov 1 00:34:39.554321 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:34:39.576126 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 1 00:34:39.576583 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:34:39.841075 systemd[1]: Stopped verity-setup.service. 
Nov 1 00:34:39.845616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:34:39.848632 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:34:39.850667 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:34:39.852507 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:34:39.854433 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:34:39.856200 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:34:39.858096 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:34:39.860031 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:34:39.861894 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:34:39.864079 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:34:39.866416 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:34:39.866606 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:34:39.868878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:34:39.869051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:34:39.871293 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:34:39.871467 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:34:39.873515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:34:39.873762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:34:39.876062 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:34:39.876231 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 00:34:39.878528 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:34:39.878706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:34:39.880776 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:34:39.882980 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:34:39.885530 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:34:39.898569 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:34:39.913692 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:34:39.916609 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:34:39.918349 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:34:39.918380 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:34:39.920922 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 00:34:39.923868 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:34:39.927949 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:34:39.930229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:34:39.931660 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Nov 1 00:34:39.934310 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:34:39.936221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:34:39.937297 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:34:39.939407 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:34:39.942753 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:34:39.957931 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:34:39.965419 systemd-journald[1138]: Time spent on flushing to /var/log/journal/9d9044a8e19e4540ae5552868f213118 is 24.775ms for 993 entries. Nov 1 00:34:39.965419 systemd-journald[1138]: System Journal (/var/log/journal/9d9044a8e19e4540ae5552868f213118) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:34:40.010086 systemd-journald[1138]: Received client request to flush runtime journal. Nov 1 00:34:40.010151 kernel: loop0: detected capacity change from 0 to 140768 Nov 1 00:34:39.965745 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:34:39.971378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:34:39.975895 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:34:39.977854 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:34:39.980967 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:34:39.983344 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:34:39.992252 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:34:40.009758 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 00:34:40.017357 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 00:34:40.020566 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:34:40.023031 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:34:40.026143 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:34:40.030789 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:34:40.041786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:34:40.044507 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:34:40.045166 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 00:34:40.050166 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:34:40.066472 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Nov 1 00:34:40.066489 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Nov 1 00:34:40.070814 kernel: loop1: detected capacity change from 0 to 142488 Nov 1 00:34:40.073263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 1 00:34:40.112624 kernel: loop2: detected capacity change from 0 to 224512 Nov 1 00:34:40.147646 kernel: loop3: detected capacity change from 0 to 140768 Nov 1 00:34:40.160673 kernel: loop4: detected capacity change from 0 to 142488 Nov 1 00:34:40.172625 kernel: loop5: detected capacity change from 0 to 224512 Nov 1 00:34:40.181406 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 1 00:34:40.181992 (sd-merge)[1200]: Merged extensions into '/usr'. Nov 1 00:34:40.185845 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:34:40.185859 systemd[1]: Reloading... Nov 1 00:34:40.232086 zram_generator::config[1225]: No configuration found. Nov 1 00:34:40.234175 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:34:40.358042 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:34:40.406766 systemd[1]: Reloading finished in 220 ms. Nov 1 00:34:40.441487 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:34:40.443793 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:34:40.459763 systemd[1]: Starting ensure-sysext.service... Nov 1 00:34:40.462098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:34:40.468911 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:34:40.468971 systemd[1]: Reloading... Nov 1 00:34:40.486227 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:34:40.486746 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:34:40.488047 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:34:40.488462 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Nov 1 00:34:40.488581 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Nov 1 00:34:40.493817 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:34:40.493834 systemd-tmpfiles[1264]: Skipping /boot Nov 1 00:34:40.507090 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:34:40.507104 systemd-tmpfiles[1264]: Skipping /boot Nov 1 00:34:40.512694 zram_generator::config[1290]: No configuration found. Nov 1 00:34:40.622513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:34:40.671404 systemd[1]: Reloading finished in 201 ms. Nov 1 00:34:40.690060 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:34:40.703037 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:34:40.712296 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:34:40.715256 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Nov 1 00:34:40.718204 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:34:40.722849 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:34:40.727773 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:34:40.731325 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:34:40.735356 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:34:40.735542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:34:40.744802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:34:40.750808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:34:40.754564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:34:40.756717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:34:40.761662 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:34:40.761829 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Nov 1 00:34:40.763421 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:34:40.764581 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:34:40.767343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:34:40.767522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:34:40.769872 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:34:40.770044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:34:40.772977 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:34:40.773138 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:34:40.780644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:34:40.780842 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:34:40.788890 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:34:40.791589 augenrules[1359]: No rules Nov 1 00:34:40.793918 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:34:40.796916 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:34:40.799514 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:34:40.801740 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:34:40.808321 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:34:40.808515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:34:40.815629 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Nov 1 00:34:40.818857 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:34:40.824823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:34:40.826442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:34:40.831677 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:34:40.833324 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:34:40.834179 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:34:40.836729 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:34:40.839445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:34:40.839641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:34:40.842746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:34:40.842917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:34:40.845660 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:34:40.845828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:34:40.863905 systemd[1]: Finished ensure-sysext.service. Nov 1 00:34:40.873645 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 00:34:40.874646 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:34:40.874794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:34:40.881624 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1393) Nov 1 00:34:40.882768 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:34:40.883552 systemd-resolved[1333]: Positive Trust Anchors: Nov 1 00:34:40.883566 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:34:40.883610 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:34:40.887823 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:34:40.891752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:34:40.894695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:34:40.896406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:34:40.899081 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Nov 1 00:34:40.901676 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:34:40.901712 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:34:40.902201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:34:40.902452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:34:40.903266 systemd-resolved[1333]: Defaulting to hostname 'linux'. Nov 1 00:34:40.910860 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:34:40.913036 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:34:40.913254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:34:40.925448 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:34:40.927575 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:34:40.932808 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:34:40.932996 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:34:40.935185 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:34:40.935373 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:34:40.938422 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:34:40.946839 systemd-networkd[1389]: lo: Link UP Nov 1 00:34:40.946848 systemd-networkd[1389]: lo: Gained carrier Nov 1 00:34:40.948719 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:34:40.948729 systemd-networkd[1389]: Enumeration completed Nov 1 00:34:40.949113 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:34:40.949118 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:34:40.949868 systemd-networkd[1389]: eth0: Link UP Nov 1 00:34:40.949872 systemd-networkd[1389]: eth0: Gained carrier Nov 1 00:34:40.949883 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:34:40.950999 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:34:40.952885 systemd[1]: Reached target network.target - Network. Nov 1 00:34:40.957629 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:34:40.958637 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:34:40.964425 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:34:40.963804 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:34:40.967361 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Nov 1 00:34:40.979633 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 1 00:34:40.983145 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:34:40.983340 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 00:34:40.983534 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:34:40.987941 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 00:34:40.988724 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:34:40.992401 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:34:40.992703 systemd-timesyncd[1408]: Initial clock synchronization to Sat 2025-11-01 00:34:40.975944 UTC. Nov 1 00:34:40.995588 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:34:41.001615 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:34:41.025634 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:34:41.026773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:34:41.073779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:34:41.128475 kernel: kvm_amd: TSC scaling supported Nov 1 00:34:41.128508 kernel: kvm_amd: Nested Virtualization enabled Nov 1 00:34:41.128521 kernel: kvm_amd: Nested Paging enabled Nov 1 00:34:41.130052 kernel: kvm_amd: LBR virtualization supported Nov 1 00:34:41.130069 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 1 00:34:41.130980 kernel: kvm_amd: Virtual GIF supported Nov 1 00:34:41.148619 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:34:41.177790 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:34:41.189815 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 00:34:41.199339 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:34:41.228997 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:34:41.231226 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:34:41.233044 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:34:41.234818 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:34:41.236810 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:34:41.239055 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:34:41.240827 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:34:41.242866 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:34:41.244861 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:34:41.244885 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:34:41.246327 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:34:41.248378 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:34:41.251625 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:34:41.261170 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Nov 1 00:34:41.264000 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:34:41.266250 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:34:41.268012 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:34:41.269529 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:34:41.271092 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:34:41.271120 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:34:41.272061 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:34:41.274653 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:34:41.279689 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:34:41.286604 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:34:41.284838 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:34:41.286898 jq[1442]: false Nov 1 00:34:41.286726 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:34:41.288754 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:34:41.292708 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:34:41.296781 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:34:41.306345 extend-filesystems[1443]: Found loop3 Nov 1 00:34:41.306345 extend-filesystems[1443]: Found loop4 Nov 1 00:34:41.306345 extend-filesystems[1443]: Found loop5 Nov 1 00:34:41.306345 extend-filesystems[1443]: Found sr0 Nov 1 00:34:41.306345 extend-filesystems[1443]: Found vda Nov 1 00:34:41.306345 extend-filesystems[1443]: Found vda1 Nov 1 00:34:41.306345 extend-filesystems[1443]: Found vda2 Nov 1 00:34:41.306345 extend-filesystems[1443]: Found vda3 Nov 1 00:34:41.306345 extend-filesystems[1443]: Found usr Nov 1 00:34:41.306345 extend-filesystems[1443]: Found vda4 Nov 1 00:34:41.306345 extend-filesystems[1443]: Found vda6 Nov 1 00:34:41.306345 extend-filesystems[1443]: Found vda7 Nov 1 00:34:41.306345 extend-filesystems[1443]: Found vda9 Nov 1 00:34:41.306345 extend-filesystems[1443]: Checking size of /dev/vda9 Nov 1 00:34:41.359858 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1390) Nov 1 00:34:41.359884 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:34:41.359898 extend-filesystems[1443]: Resized partition /dev/vda9 Nov 1 00:34:41.318860 dbus-daemon[1441]: [system] SELinux support is enabled Nov 1 00:34:41.306778 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:34:41.363516 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:34:41.312739 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:34:41.313813 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:34:41.314233 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Nov 1 00:34:41.378184 update_engine[1458]: I20251101 00:34:41.334854 1458 main.cc:92] Flatcar Update Engine starting Nov 1 00:34:41.378184 update_engine[1458]: I20251101 00:34:41.340747 1458 update_check_scheduler.cc:74] Next update check in 4m39s Nov 1 00:34:41.316830 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:34:41.321717 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:34:41.378658 jq[1460]: true Nov 1 00:34:41.325655 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:34:41.329293 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:34:41.334100 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:34:41.334308 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:34:41.379184 jq[1466]: true Nov 1 00:34:41.334669 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:34:41.334861 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:34:41.339984 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:34:41.340208 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:34:41.362961 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:34:41.363513 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:34:41.363536 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:34:41.364287 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:34:41.364304 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:34:41.369044 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:34:41.373393 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:34:41.393621 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:34:41.397380 tar[1465]: linux-amd64/LICENSE Nov 1 00:34:41.420645 tar[1465]: linux-amd64/helm Nov 1 00:34:41.417477 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:34:41.420800 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:34:41.420800 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:34:41.420800 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 1 00:34:41.417734 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:34:41.426231 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Nov 1 00:34:41.432417 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:34:41.432441 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:34:41.433017 systemd-logind[1455]: New seat seat0. Nov 1 00:34:41.435080 systemd[1]: Started systemd-logind.service - User Login Management. 
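For context on the extend-filesystems output above: resize2fs grew /dev/vda9 online from 553472 to 1864699 blocks at 4 KiB each. A quick back-of-the-envelope conversion (block counts copied from the log):

    BLOCK_SIZE = 4096  # bytes; "(4k)" per the resize2fs message

    def blocks_to_gib(blocks):
        """Size of `blocks` 4 KiB blocks, in GiB."""
        return blocks * BLOCK_SIZE / 2**30

    before, after = 553_472, 1_864_699   # from the extend-filesystems lines
    print(f"{blocks_to_gib(before):.2f} GiB -> {blocks_to_gib(after):.2f} GiB")
    # ~2.11 GiB -> ~7.11 GiB: the root filesystem was grown to fill the disk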
Nov 1 00:34:41.445881 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:34:41.447405 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:34:41.448108 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:34:41.451312 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 1 00:34:41.453939 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:34:41.477872 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:34:41.487819 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:34:41.497243 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:34:41.497447 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:34:41.510083 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:34:41.521705 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:34:41.529897 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:34:41.532785 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:34:41.535275 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:34:41.565167 containerd[1468]: time="2025-11-01T00:34:41.565076727Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:34:41.586649 containerd[1468]: time="2025-11-01T00:34:41.586607196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:34:41.588316 containerd[1468]: time="2025-11-01T00:34:41.588280550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:34:41.588316 containerd[1468]: time="2025-11-01T00:34:41.588308834Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:34:41.588366 containerd[1468]: time="2025-11-01T00:34:41.588324408Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:34:41.588529 containerd[1468]: time="2025-11-01T00:34:41.588506887Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:34:41.588567 containerd[1468]: time="2025-11-01T00:34:41.588528076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:34:41.588645 containerd[1468]: time="2025-11-01T00:34:41.588621156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:34:41.588645 containerd[1468]: time="2025-11-01T00:34:41.588640683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:34:41.588893 containerd[1468]: time="2025-11-01T00:34:41.588861155Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:34:41.588893 containerd[1468]: time="2025-11-01T00:34:41.588887248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:34:41.588933 containerd[1468]: time="2025-11-01T00:34:41.588900889Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:34:41.588933 containerd[1468]: time="2025-11-01T00:34:41.588912239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:34:41.589034 containerd[1468]: time="2025-11-01T00:34:41.589011295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:34:41.589271 containerd[1468]: time="2025-11-01T00:34:41.589248001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:34:41.589395 containerd[1468]: time="2025-11-01T00:34:41.589372099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:34:41.589395 containerd[1468]: time="2025-11-01T00:34:41.589388402Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:34:41.589503 containerd[1468]: time="2025-11-01T00:34:41.589482664Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:34:41.589559 containerd[1468]: time="2025-11-01T00:34:41.589537562Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:34:41.594498 containerd[1468]: time="2025-11-01T00:34:41.594464574Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:34:41.594534 containerd[1468]: time="2025-11-01T00:34:41.594510794Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:34:41.594554 containerd[1468]: time="2025-11-01T00:34:41.594535635Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:34:41.594573 containerd[1468]: time="2025-11-01T00:34:41.594552440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:34:41.594573 containerd[1468]: time="2025-11-01T00:34:41.594567763Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:34:41.594733 containerd[1468]: time="2025-11-01T00:34:41.594702220Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:34:41.594970 containerd[1468]: time="2025-11-01T00:34:41.594944791Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:34:41.595195 containerd[1468]: time="2025-11-01T00:34:41.595162370Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Nov 1 00:34:41.595195 containerd[1468]: time="2025-11-01T00:34:41.595183859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:34:41.595248 containerd[1468]: time="2025-11-01T00:34:41.595196570Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:34:41.595248 containerd[1468]: time="2025-11-01T00:34:41.595211132Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:34:41.595248 containerd[1468]: time="2025-11-01T00:34:41.595223864Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:34:41.595248 containerd[1468]: time="2025-11-01T00:34:41.595236314Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:34:41.595325 containerd[1468]: time="2025-11-01T00:34:41.595249005Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:34:41.595325 containerd[1468]: time="2025-11-01T00:34:41.595263408Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:34:41.595325 containerd[1468]: time="2025-11-01T00:34:41.595277930Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:34:41.595325 containerd[1468]: time="2025-11-01T00:34:41.595291112Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:34:41.595325 containerd[1468]: time="2025-11-01T00:34:41.595302862Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:34:41.595325 containerd[1468]: time="2025-11-01T00:34:41.595322709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595429 containerd[1468]: time="2025-11-01T00:34:41.595340695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595429 containerd[1468]: time="2025-11-01T00:34:41.595354247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595429 containerd[1468]: time="2025-11-01T00:34:41.595366837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595429 containerd[1468]: time="2025-11-01T00:34:41.595378758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595429 containerd[1468]: time="2025-11-01T00:34:41.595391959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595429 containerd[1468]: time="2025-11-01T00:34:41.595403639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595429 containerd[1468]: time="2025-11-01T00:34:41.595416361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595429 containerd[1468]: time="2025-11-01T00:34:41.595429592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Nov 1 00:34:41.595577 containerd[1468]: time="2025-11-01T00:34:41.595444015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595577 containerd[1468]: time="2025-11-01T00:34:41.595456836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595577 containerd[1468]: time="2025-11-01T00:34:41.595468456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595577 containerd[1468]: time="2025-11-01T00:34:41.595488233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595577 containerd[1468]: time="2025-11-01T00:34:41.595503166Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:34:41.595577 containerd[1468]: time="2025-11-01T00:34:41.595531131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595577 containerd[1468]: time="2025-11-01T00:34:41.595543431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595577 containerd[1468]: time="2025-11-01T00:34:41.595555481Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:34:41.595746 containerd[1468]: time="2025-11-01T00:34:41.595621018Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:34:41.595746 containerd[1468]: time="2025-11-01T00:34:41.595638444Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:34:41.595746 containerd[1468]: time="2025-11-01T00:34:41.595648453Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:34:41.595746 containerd[1468]: time="2025-11-01T00:34:41.595659922Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:34:41.595746 containerd[1468]: time="2025-11-01T00:34:41.595669010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:34:41.595746 containerd[1468]: time="2025-11-01T00:34:41.595680400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:34:41.595746 containerd[1468]: time="2025-11-01T00:34:41.595689959Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:34:41.595746 containerd[1468]: time="2025-11-01T00:34:41.595700277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:34:41.596891 containerd[1468]: time="2025-11-01T00:34:41.595953047Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:34:41.596891 containerd[1468]: time="2025-11-01T00:34:41.596027041Z" level=info msg="Connect containerd service" Nov 1 00:34:41.596891 containerd[1468]: time="2025-11-01T00:34:41.596059300Z" level=info msg="using legacy CRI server" Nov 1 00:34:41.596891 containerd[1468]: time="2025-11-01T00:34:41.596066345Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:34:41.596891 containerd[1468]: time="2025-11-01T00:34:41.596170326Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:34:41.596891 containerd[1468]: time="2025-11-01T00:34:41.596691149Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:34:41.597147 
containerd[1468]: time="2025-11-01T00:34:41.597004681Z" level=info msg="Start subscribing containerd event" Nov 1 00:34:41.598623 containerd[1468]: time="2025-11-01T00:34:41.598336418Z" level=info msg="Start recovering state" Nov 1 00:34:41.598623 containerd[1468]: time="2025-11-01T00:34:41.598395990Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:34:41.598623 containerd[1468]: time="2025-11-01T00:34:41.598422353Z" level=info msg="Start event monitor" Nov 1 00:34:41.598623 containerd[1468]: time="2025-11-01T00:34:41.598444282Z" level=info msg="Start snapshots syncer" Nov 1 00:34:41.598623 containerd[1468]: time="2025-11-01T00:34:41.598460146Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:34:41.598623 containerd[1468]: time="2025-11-01T00:34:41.598465841Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:34:41.598623 containerd[1468]: time="2025-11-01T00:34:41.598468693Z" level=info msg="Start streaming server" Nov 1 00:34:41.598623 containerd[1468]: time="2025-11-01T00:34:41.598535411Z" level=info msg="containerd successfully booted in 0.034449s" Nov 1 00:34:41.598908 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:34:41.823827 tar[1465]: linux-amd64/README.md Nov 1 00:34:41.839945 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:34:42.321718 systemd-networkd[1389]: eth0: Gained IPv6LL Nov 1 00:34:42.324583 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:34:42.327098 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:34:42.338795 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 00:34:42.341506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:34:42.344313 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:34:42.365278 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:34:42.367511 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 00:34:42.367856 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 1 00:34:42.371076 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:34:43.050781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:34:43.053149 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:34:43.055552 systemd[1]: Startup finished in 881ms (kernel) + 5.341s (initrd) + 4.039s (userspace) = 10.261s. Nov 1 00:34:43.056117 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:34:43.451238 kubelet[1553]: E1101 00:34:43.451047 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:34:43.455279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:34:43.455483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:34:46.296740 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
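The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the expected state of a node that has not yet run kubeadm init or kubeadm join, since those commands write that file; systemd keeps restarting the unit in the meantime. A minimal sketch of the failing precondition (an illustration, not kubelet source):

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the log

    # kubeadm writes this file during `kubeadm init`/`kubeadm join`; until
    # then the kubelet exits with status 1 and systemd retries the unit.
    if not KUBELET_CONFIG.is_file():
        raise SystemExit(f"failed to load kubelet config file, path: {KUBELET_CONFIG}")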
Nov 1 00:34:46.297980 systemd[1]: Started sshd@0-10.0.0.5:22-10.0.0.1:50912.service - OpenSSH per-connection server daemon (10.0.0.1:50912). Nov 1 00:34:46.342315 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 50912 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:34:46.344166 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:34:46.352520 systemd-logind[1455]: New session 1 of user core. Nov 1 00:34:46.354023 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:34:46.363829 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:34:46.374787 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:34:46.377696 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:34:46.385324 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:34:46.498852 systemd[1571]: Queued start job for default target default.target. Nov 1 00:34:46.510904 systemd[1571]: Created slice app.slice - User Application Slice. Nov 1 00:34:46.510931 systemd[1571]: Reached target paths.target - Paths. Nov 1 00:34:46.510945 systemd[1571]: Reached target timers.target - Timers. Nov 1 00:34:46.512363 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:34:46.523371 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:34:46.523518 systemd[1571]: Reached target sockets.target - Sockets. Nov 1 00:34:46.523537 systemd[1571]: Reached target basic.target - Basic System. Nov 1 00:34:46.523575 systemd[1571]: Reached target default.target - Main User Target. Nov 1 00:34:46.523623 systemd[1571]: Startup finished in 130ms. Nov 1 00:34:46.523943 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:34:46.525552 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:34:46.584566 systemd[1]: Started sshd@1-10.0.0.5:22-10.0.0.1:50916.service - OpenSSH per-connection server daemon (10.0.0.1:50916). Nov 1 00:34:46.624083 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 50916 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:34:46.625817 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:34:46.629947 systemd-logind[1455]: New session 2 of user core. Nov 1 00:34:46.644741 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:34:46.699996 sshd[1582]: pam_unix(sshd:session): session closed for user core Nov 1 00:34:46.707206 systemd[1]: sshd@1-10.0.0.5:22-10.0.0.1:50916.service: Deactivated successfully. Nov 1 00:34:46.708927 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:34:46.710453 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:34:46.711701 systemd[1]: Started sshd@2-10.0.0.5:22-10.0.0.1:50922.service - OpenSSH per-connection server daemon (10.0.0.1:50922). Nov 1 00:34:46.712380 systemd-logind[1455]: Removed session 2. Nov 1 00:34:46.751970 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 50922 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:34:46.753389 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:34:46.756919 systemd-logind[1455]: New session 3 of user core. Nov 1 00:34:46.771701 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 1 00:34:46.820708 sshd[1589]: pam_unix(sshd:session): session closed for user core Nov 1 00:34:46.831356 systemd[1]: sshd@2-10.0.0.5:22-10.0.0.1:50922.service: Deactivated successfully. Nov 1 00:34:46.833063 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:34:46.834546 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:34:46.835773 systemd[1]: Started sshd@3-10.0.0.5:22-10.0.0.1:50924.service - OpenSSH per-connection server daemon (10.0.0.1:50924). Nov 1 00:34:46.836565 systemd-logind[1455]: Removed session 3. Nov 1 00:34:46.870702 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 50924 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:34:46.872239 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:34:46.875831 systemd-logind[1455]: New session 4 of user core. Nov 1 00:34:46.895714 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:34:46.950062 sshd[1596]: pam_unix(sshd:session): session closed for user core Nov 1 00:34:46.959377 systemd[1]: sshd@3-10.0.0.5:22-10.0.0.1:50924.service: Deactivated successfully. Nov 1 00:34:46.961366 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:34:46.963170 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:34:46.971842 systemd[1]: Started sshd@4-10.0.0.5:22-10.0.0.1:50930.service - OpenSSH per-connection server daemon (10.0.0.1:50930). Nov 1 00:34:46.972720 systemd-logind[1455]: Removed session 4. Nov 1 00:34:47.001568 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 50930 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:34:47.002995 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:34:47.006559 systemd-logind[1455]: New session 5 of user core. Nov 1 00:34:47.015689 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:34:47.073469 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:34:47.073824 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:34:47.089141 sudo[1606]: pam_unix(sudo:session): session closed for user root Nov 1 00:34:47.091194 sshd[1603]: pam_unix(sshd:session): session closed for user core Nov 1 00:34:47.103421 systemd[1]: sshd@4-10.0.0.5:22-10.0.0.1:50930.service: Deactivated successfully. Nov 1 00:34:47.105054 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:34:47.106628 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:34:47.107911 systemd[1]: Started sshd@5-10.0.0.5:22-10.0.0.1:50942.service - OpenSSH per-connection server daemon (10.0.0.1:50942). Nov 1 00:34:47.108695 systemd-logind[1455]: Removed session 5. Nov 1 00:34:47.153875 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 50942 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:34:47.155359 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:34:47.158838 systemd-logind[1455]: New session 6 of user core. Nov 1 00:34:47.168701 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 1 00:34:47.221998 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:34:47.222334 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:34:47.225922 sudo[1615]: pam_unix(sudo:session): session closed for user root Nov 1 00:34:47.232698 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:34:47.233072 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:34:47.254792 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:34:47.256407 auditctl[1618]: No rules Nov 1 00:34:47.257638 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:34:47.257886 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:34:47.259545 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:34:47.288864 augenrules[1636]: No rules Nov 1 00:34:47.290489 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:34:47.292087 sudo[1614]: pam_unix(sudo:session): session closed for user root Nov 1 00:34:47.293814 sshd[1611]: pam_unix(sshd:session): session closed for user core Nov 1 00:34:47.308350 systemd[1]: sshd@5-10.0.0.5:22-10.0.0.1:50942.service: Deactivated successfully. Nov 1 00:34:47.310307 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:34:47.312176 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:34:47.321845 systemd[1]: Started sshd@6-10.0.0.5:22-10.0.0.1:50956.service - OpenSSH per-connection server daemon (10.0.0.1:50956). Nov 1 00:34:47.322701 systemd-logind[1455]: Removed session 6. Nov 1 00:34:47.351587 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 50956 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:34:47.353042 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:34:47.356565 systemd-logind[1455]: New session 7 of user core. Nov 1 00:34:47.365774 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:34:47.418835 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:34:47.419167 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:34:47.695845 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:34:47.695979 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:34:47.958578 dockerd[1665]: time="2025-11-01T00:34:47.958436284Z" level=info msg="Starting up" Nov 1 00:34:48.583687 dockerd[1665]: time="2025-11-01T00:34:48.583638839Z" level=info msg="Loading containers: start." Nov 1 00:34:48.688624 kernel: Initializing XFRM netlink socket Nov 1 00:34:48.757733 systemd-networkd[1389]: docker0: Link UP Nov 1 00:34:48.777983 dockerd[1665]: time="2025-11-01T00:34:48.777940039Z" level=info msg="Loading containers: done." Nov 1 00:34:48.790788 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3369376037-merged.mount: Deactivated successfully. 
Nov 1 00:34:48.792725 dockerd[1665]: time="2025-11-01T00:34:48.792676897Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:34:48.792831 dockerd[1665]: time="2025-11-01T00:34:48.792782067Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:34:48.792922 dockerd[1665]: time="2025-11-01T00:34:48.792905169Z" level=info msg="Daemon has completed initialization" Nov 1 00:34:48.830071 dockerd[1665]: time="2025-11-01T00:34:48.829996543Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:34:48.830190 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:34:49.591168 containerd[1468]: time="2025-11-01T00:34:49.591129967Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:34:50.136859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1921050802.mount: Deactivated successfully. Nov 1 00:34:51.023418 containerd[1468]: time="2025-11-01T00:34:51.023373804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:51.024163 containerd[1468]: time="2025-11-01T00:34:51.024137203Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 1 00:34:51.025299 containerd[1468]: time="2025-11-01T00:34:51.025274065Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:51.029137 containerd[1468]: time="2025-11-01T00:34:51.029096196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:51.030109 containerd[1468]: time="2025-11-01T00:34:51.030067825Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.438898888s" Nov 1 00:34:51.030156 containerd[1468]: time="2025-11-01T00:34:51.030111254Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:34:51.030642 containerd[1468]: time="2025-11-01T00:34:51.030619960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:34:52.100497 containerd[1468]: time="2025-11-01T00:34:52.100448085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:52.101183 containerd[1468]: time="2025-11-01T00:34:52.101129696Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 1 00:34:52.102236 containerd[1468]: time="2025-11-01T00:34:52.102203579Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" 
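dockerd's overlay2 warning above fires when the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, which forces the daemon off its native diff path. One way to confirm the flag on a machine like this (assumes CONFIG_IKCONFIG_PROC exposes /proc/config.gz; many distros ship /boot/config-$(uname -r) instead):

    import gzip

    # Scan the in-kernel config for the overlayfs options dockerd cares about.
    with gzip.open("/proc/config.gz", "rt") as f:
        flags = sorted(line.strip() for line in f
                       if line.startswith("CONFIG_OVERLAY_FS"))

    print("\n".join(flags))  # expect CONFIG_OVERLAY_FS_REDIRECT_DIR=y here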
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:52.104861 containerd[1468]: time="2025-11-01T00:34:52.104824437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:52.105881 containerd[1468]: time="2025-11-01T00:34:52.105841892Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.075193643s" Nov 1 00:34:52.105925 containerd[1468]: time="2025-11-01T00:34:52.105879524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:34:52.106345 containerd[1468]: time="2025-11-01T00:34:52.106326561Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:34:53.635476 containerd[1468]: time="2025-11-01T00:34:53.635412264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:53.636194 containerd[1468]: time="2025-11-01T00:34:53.636127924Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 1 00:34:53.637245 containerd[1468]: time="2025-11-01T00:34:53.637210082Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:53.639716 containerd[1468]: time="2025-11-01T00:34:53.639684805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:53.640927 containerd[1468]: time="2025-11-01T00:34:53.640900551Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.534544628s" Nov 1 00:34:53.640976 containerd[1468]: time="2025-11-01T00:34:53.640927518Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:34:53.641389 containerd[1468]: time="2025-11-01T00:34:53.641364575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:34:53.705712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:34:53.715739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:34:53.875321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:34:53.879449 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:34:53.921644 kubelet[1888]: E1101 00:34:53.921509 1888 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:34:53.928003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:34:53.928205 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:34:55.038297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount658932159.mount: Deactivated successfully. Nov 1 00:34:55.895048 containerd[1468]: time="2025-11-01T00:34:55.894980408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:55.895741 containerd[1468]: time="2025-11-01T00:34:55.895682178Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 1 00:34:55.896968 containerd[1468]: time="2025-11-01T00:34:55.896919604Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:55.898945 containerd[1468]: time="2025-11-01T00:34:55.898899831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:55.899658 containerd[1468]: time="2025-11-01T00:34:55.899590985Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.258197989s" Nov 1 00:34:55.899694 containerd[1468]: time="2025-11-01T00:34:55.899659847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:34:55.900169 containerd[1468]: time="2025-11-01T00:34:55.900148513Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:34:56.383527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2892233345.mount: Deactivated successfully. 
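Each image pull above logs a size and a wall-clock duration, which makes effective registry throughput easy to estimate. For the kube-proxy pull (numbers copied from the containerd messages):

    size_bytes = 30_923_225      # repo digest size for kube-proxy:v1.32.9
    elapsed_s = 2.258197989      # "in 2.258197989s"

    print(f"{size_bytes / elapsed_s / 2**20:.1f} MiB/s")  # ~13.1 MiB/s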
Nov 1 00:34:57.052809 containerd[1468]: time="2025-11-01T00:34:57.052754280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:57.053676 containerd[1468]: time="2025-11-01T00:34:57.053581735Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 1 00:34:57.054919 containerd[1468]: time="2025-11-01T00:34:57.054867616Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:57.057648 containerd[1468]: time="2025-11-01T00:34:57.057607522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:57.058819 containerd[1468]: time="2025-11-01T00:34:57.058768845Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.15859334s" Nov 1 00:34:57.058819 containerd[1468]: time="2025-11-01T00:34:57.058811939Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:34:57.059332 containerd[1468]: time="2025-11-01T00:34:57.059306921Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:34:57.607898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035516656.mount: Deactivated successfully. 
Nov 1 00:34:57.613559 containerd[1468]: time="2025-11-01T00:34:57.613524592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:57.614230 containerd[1468]: time="2025-11-01T00:34:57.614179636Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 1 00:34:57.615345 containerd[1468]: time="2025-11-01T00:34:57.615304943Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:57.617329 containerd[1468]: time="2025-11-01T00:34:57.617291277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:34:57.618064 containerd[1468]: time="2025-11-01T00:34:57.618025430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 558.687212ms" Nov 1 00:34:57.618109 containerd[1468]: time="2025-11-01T00:34:57.618060123Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:34:57.618557 containerd[1468]: time="2025-11-01T00:34:57.618519760Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:34:58.113278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105803978.mount: Deactivated successfully. Nov 1 00:35:00.050031 containerd[1468]: time="2025-11-01T00:35:00.049970133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:35:00.050613 containerd[1468]: time="2025-11-01T00:35:00.050548908Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 1 00:35:00.051742 containerd[1468]: time="2025-11-01T00:35:00.051711695Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:35:00.054727 containerd[1468]: time="2025-11-01T00:35:00.054674466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:35:00.056012 containerd[1468]: time="2025-11-01T00:35:00.055978516Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.437425456s" Nov 1 00:35:00.056012 containerd[1468]: time="2025-11-01T00:35:00.056009255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:35:02.850753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:35:02.863795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:35:02.887149 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit session-7.scope)... Nov 1 00:35:02.887165 systemd[1]: Reloading... Nov 1 00:35:02.954660 zram_generator::config[2088]: No configuration found. Nov 1 00:35:03.181413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:35:03.257223 systemd[1]: Reloading finished in 369 ms. Nov 1 00:35:03.304435 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:35:03.304538 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:35:03.304808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:35:03.307362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:35:03.469207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:35:03.473428 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:35:03.513617 kubelet[2134]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:35:03.513617 kubelet[2134]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:35:03.513617 kubelet[2134]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
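Each deprecation warning above points at the same remedy: move the flag into the file the kubelet loads via --config. A minimal illustration of that file (field names assumed from KubeletConfiguration v1beta1; verify against the running kubelet version). The kubelet parses YAML, and since YAML is a superset of JSON, a JSON dump works too:

    import json

    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # replaces --container-runtime-endpoint (deprecated flag above);
        # socket path as reported in the containerd config dump earlier
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        # replaces --volume-plugin-dir; path as probed later in this log
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }
    print(json.dumps(kubelet_config, indent=2))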
Nov 1 00:35:03.513617 kubelet[2134]: I1101 00:35:03.511417 2134 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:35:04.014174 kubelet[2134]: I1101 00:35:04.014125 2134 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:35:04.014174 kubelet[2134]: I1101 00:35:04.014157 2134 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:35:04.014448 kubelet[2134]: I1101 00:35:04.014420 2134 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:35:04.034101 kubelet[2134]: E1101 00:35:04.034051 2134 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:35:04.035038 kubelet[2134]: I1101 00:35:04.035001 2134 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:35:04.043224 kubelet[2134]: E1101 00:35:04.043189 2134 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:35:04.043224 kubelet[2134]: I1101 00:35:04.043223 2134 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:35:04.048417 kubelet[2134]: I1101 00:35:04.048390 2134 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:35:04.049494 kubelet[2134]: I1101 00:35:04.049447 2134 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:35:04.049687 kubelet[2134]: I1101 00:35:04.049483 2134 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:35:04.049768 kubelet[2134]: I1101 00:35:04.049688 2134 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:35:04.049768 kubelet[2134]: I1101 00:35:04.049699 2134 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:35:04.049852 kubelet[2134]: I1101 00:35:04.049835 2134 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:35:04.052773 kubelet[2134]: I1101 00:35:04.052746 2134 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:35:04.052773 kubelet[2134]: I1101 00:35:04.052772 2134 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:35:04.052820 kubelet[2134]: I1101 00:35:04.052790 2134 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:35:04.052820 kubelet[2134]: I1101 00:35:04.052800 2134 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:35:04.056680 kubelet[2134]: I1101 00:35:04.056217 2134 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:35:04.056680 kubelet[2134]: W1101 00:35:04.056469 2134 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Nov 1 00:35:04.056680 kubelet[2134]: E1101 00:35:04.056512 2134 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:35:04.056680 kubelet[2134]: I1101 00:35:04.056636 2134 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:35:04.057282 kubelet[2134]: W1101 00:35:04.057224 2134 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Nov 1 00:35:04.057282 kubelet[2134]: E1101 00:35:04.057274 2134 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:35:04.057674 kubelet[2134]: W1101 00:35:04.057649 2134 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:35:04.059845 kubelet[2134]: I1101 00:35:04.059814 2134 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:35:04.059886 kubelet[2134]: I1101 00:35:04.059850 2134 server.go:1287] "Started kubelet" Nov 1 00:35:04.061545 kubelet[2134]: I1101 00:35:04.059955 2134 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:35:04.061545 kubelet[2134]: I1101 00:35:04.060819 2134 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:35:04.061649 kubelet[2134]: I1101 00:35:04.061584 2134 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:35:04.061934 kubelet[2134]: I1101 00:35:04.061916 2134 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:35:04.062420 kubelet[2134]: I1101 00:35:04.062400 2134 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:35:04.063371 kubelet[2134]: I1101 00:35:04.062480 2134 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:35:04.063371 kubelet[2134]: E1101 00:35:04.062511 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:35:04.063371 kubelet[2134]: I1101 00:35:04.062557 2134 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:35:04.063371 kubelet[2134]: I1101 00:35:04.063275 2134 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:35:04.063371 kubelet[2134]: I1101 00:35:04.063294 2134 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:35:04.063789 kubelet[2134]: W1101 00:35:04.063745 2134 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Nov 1 00:35:04.063831 kubelet[2134]: E1101 00:35:04.063795 2134 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:35:04.064162 kubelet[2134]: I1101 00:35:04.064135 2134 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:35:04.064666 kubelet[2134]: E1101 00:35:04.063703 2134 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873bad531e4195f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:35:04.059828575 +0000 UTC m=+0.582574167,LastTimestamp:2025-11-01 00:35:04.059828575 +0000 UTC m=+0.582574167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:35:04.064741 kubelet[2134]: E1101 00:35:04.064723 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="200ms" Nov 1 00:35:04.065788 kubelet[2134]: I1101 00:35:04.065771 2134 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:35:04.065788 kubelet[2134]: I1101 00:35:04.065783 2134 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:35:04.067776 kubelet[2134]: E1101 00:35:04.067745 2134 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:35:04.080253 kubelet[2134]: I1101 00:35:04.080205 2134 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:35:04.081722 kubelet[2134]: I1101 00:35:04.081698 2134 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:35:04.081722 kubelet[2134]: I1101 00:35:04.081722 2134 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:35:04.081792 kubelet[2134]: I1101 00:35:04.081739 2134 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:35:04.081792 kubelet[2134]: I1101 00:35:04.081748 2134 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:35:04.081831 kubelet[2134]: E1101 00:35:04.081797 2134 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:35:04.083123 kubelet[2134]: W1101 00:35:04.082565 2134 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Nov 1 00:35:04.083123 kubelet[2134]: E1101 00:35:04.082610 2134 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:35:04.083711 kubelet[2134]: I1101 00:35:04.083685 2134 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:35:04.083711 kubelet[2134]: I1101 00:35:04.083704 2134 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:35:04.083773 kubelet[2134]: I1101 00:35:04.083740 2134 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:35:04.162783 kubelet[2134]: E1101 00:35:04.162742 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:35:04.182116 kubelet[2134]: E1101 00:35:04.182084 2134 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:35:04.263341 kubelet[2134]: E1101 00:35:04.263300 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:35:04.265944 kubelet[2134]: E1101 00:35:04.265856 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="400ms" Nov 1 00:35:04.364066 kubelet[2134]: E1101 00:35:04.364024 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:35:04.382428 kubelet[2134]: E1101 00:35:04.382383 2134 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:35:04.464699 kubelet[2134]: E1101 00:35:04.464666 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:35:04.565191 kubelet[2134]: E1101 00:35:04.565103 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:35:04.589078 kubelet[2134]: I1101 00:35:04.589045 2134 policy_none.go:49] "None policy: Start" Nov 1 00:35:04.589131 kubelet[2134]: I1101 00:35:04.589082 2134 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:35:04.589131 kubelet[2134]: I1101 00:35:04.589103 2134 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:35:04.594726 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:35:04.611466 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
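
The lease controller entries show the retry interval doubling on each failure: 200ms above, 400ms here, then 800ms and 1.6s further down. A small sketch of that doubling schedule; the ladder is read off the log, the loop is illustrative rather than the kubelet's actual controller code:

    // leasebackoff.go: reproduces the retry ladder visible in the
    // "Failed to ensure lease exists, will retry" entries.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond // first interval logged
        for i := 0; i < 4; i++ {
            fmt.Printf("retry %d in %v\n", i+1, interval)
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, matching the log
        }
    }
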
Nov 1 00:35:04.614346 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 00:35:04.621504 kubelet[2134]: I1101 00:35:04.621480 2134 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:35:04.621729 kubelet[2134]: I1101 00:35:04.621712 2134 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:35:04.621766 kubelet[2134]: I1101 00:35:04.621728 2134 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:35:04.622295 kubelet[2134]: I1101 00:35:04.621967 2134 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:35:04.622682 kubelet[2134]: E1101 00:35:04.622662 2134 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:35:04.622717 kubelet[2134]: E1101 00:35:04.622708 2134 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:35:04.666318 kubelet[2134]: E1101 00:35:04.666282 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="800ms" Nov 1 00:35:04.723226 kubelet[2134]: I1101 00:35:04.723195 2134 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:35:04.723446 kubelet[2134]: E1101 00:35:04.723421 2134 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Nov 1 00:35:04.789699 systemd[1]: Created slice kubepods-burstable-pod639c64a68c7aeb5708b93264671df53e.slice - libcontainer container kubepods-burstable-pod639c64a68c7aeb5708b93264671df53e.slice. Nov 1 00:35:04.798383 kubelet[2134]: E1101 00:35:04.798352 2134 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:35:04.800480 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Nov 1 00:35:04.801922 kubelet[2134]: E1101 00:35:04.801895 2134 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:35:04.819194 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
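
The eviction manager that just started enforces the HardEvictionThresholds dumped in the NodeConfig at the top of this log: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A hedged sketch of how one of those thresholds is evaluated; the types and helper are illustrative, not kubelet internals:

    // evictioncheck.go: evaluates a hard-eviction threshold that is either an
    // absolute quantity (100Mi) or a percentage of capacity (0.1), as in the
    // NodeConfig dump above.
    package main

    import "fmt"

    type threshold struct {
        signal   string
        quantity uint64  // absolute bytes; 0 when percentage-based
        percent  float64 // fraction of capacity; 0 when quantity-based
    }

    func exceeded(t threshold, available, capacity uint64) bool {
        limit := t.quantity
        if t.percent > 0 {
            limit = uint64(float64(capacity) * t.percent)
        }
        return available < limit
    }

    func main() {
        memT := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
        fsT := threshold{signal: "nodefs.available", percent: 0.1}         // 10%

        fmt.Println(exceeded(memT, 80<<20, 4<<30)) // true: 80Mi < 100Mi
        fmt.Println(exceeded(fsT, 5<<30, 40<<30))  // false: 5Gi >= 4Gi (10% of 40Gi)
    }
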
Nov 1 00:35:04.820830 kubelet[2134]: E1101 00:35:04.820804 2134 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:35:04.867163 kubelet[2134]: I1101 00:35:04.867137 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:04.867212 kubelet[2134]: I1101 00:35:04.867169 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:04.867212 kubelet[2134]: I1101 00:35:04.867193 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:35:04.867476 kubelet[2134]: I1101 00:35:04.867459 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/639c64a68c7aeb5708b93264671df53e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"639c64a68c7aeb5708b93264671df53e\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:04.867504 kubelet[2134]: I1101 00:35:04.867480 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/639c64a68c7aeb5708b93264671df53e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"639c64a68c7aeb5708b93264671df53e\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:04.867504 kubelet[2134]: I1101 00:35:04.867496 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:04.867563 kubelet[2134]: I1101 00:35:04.867516 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:04.867563 kubelet[2134]: I1101 00:35:04.867532 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:04.867563 kubelet[2134]: I1101 00:35:04.867548 2134 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/639c64a68c7aeb5708b93264671df53e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"639c64a68c7aeb5708b93264671df53e\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:04.924952 kubelet[2134]: I1101 00:35:04.924922 2134 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:35:04.925253 kubelet[2134]: E1101 00:35:04.925223 2134 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Nov 1 00:35:05.098997 kubelet[2134]: E1101 00:35:05.098875 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:05.099422 containerd[1468]: time="2025-11-01T00:35:05.099385108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:639c64a68c7aeb5708b93264671df53e,Namespace:kube-system,Attempt:0,}" Nov 1 00:35:05.102674 kubelet[2134]: E1101 00:35:05.102656 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:05.102955 containerd[1468]: time="2025-11-01T00:35:05.102925349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 1 00:35:05.121229 kubelet[2134]: E1101 00:35:05.121203 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:05.121640 containerd[1468]: time="2025-11-01T00:35:05.121497456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 1 00:35:05.278915 kubelet[2134]: W1101 00:35:05.278870 2134 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Nov 1 00:35:05.278970 kubelet[2134]: E1101 00:35:05.278913 2134 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:35:05.327172 kubelet[2134]: I1101 00:35:05.327139 2134 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:35:05.327434 kubelet[2134]: E1101 00:35:05.327398 2134 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Nov 1 00:35:05.367132 kubelet[2134]: W1101 00:35:05.367029 2134 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Nov 1 00:35:05.367132 kubelet[2134]: E1101 00:35:05.367081 2134 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:35:05.466971 kubelet[2134]: E1101 00:35:05.466933 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="1.6s" Nov 1 00:35:05.553638 kubelet[2134]: W1101 00:35:05.553561 2134 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Nov 1 00:35:05.553751 kubelet[2134]: E1101 00:35:05.553641 2134 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:35:05.558575 kubelet[2134]: W1101 00:35:05.558524 2134 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Nov 1 00:35:05.558669 kubelet[2134]: E1101 00:35:05.558562 2134 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:35:06.053048 kubelet[2134]: E1101 00:35:06.052994 2134 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:35:06.128489 kubelet[2134]: I1101 00:35:06.128461 2134 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:35:06.128821 kubelet[2134]: E1101 00:35:06.128779 2134 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Nov 1 00:35:06.331537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1391367211.mount: Deactivated successfully. 
Nov 1 00:35:06.336088 containerd[1468]: time="2025-11-01T00:35:06.336029776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:35:06.337688 containerd[1468]: time="2025-11-01T00:35:06.337614039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:35:06.338711 containerd[1468]: time="2025-11-01T00:35:06.338661624Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:35:06.339553 containerd[1468]: time="2025-11-01T00:35:06.339515454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:35:06.340442 containerd[1468]: time="2025-11-01T00:35:06.340398983Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:35:06.341124 containerd[1468]: time="2025-11-01T00:35:06.341085283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:35:06.342007 containerd[1468]: time="2025-11-01T00:35:06.341952876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:35:06.343570 containerd[1468]: time="2025-11-01T00:35:06.343528175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:35:06.345343 containerd[1468]: time="2025-11-01T00:35:06.345307885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.223753795s" Nov 1 00:35:06.345963 containerd[1468]: time="2025-11-01T00:35:06.345932350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.246459567s" Nov 1 00:35:06.348314 containerd[1468]: time="2025-11-01T00:35:06.348289909Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.245313314s" Nov 1 00:35:06.497105 containerd[1468]: time="2025-11-01T00:35:06.496759941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:06.497105 containerd[1468]: time="2025-11-01T00:35:06.496831441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:06.497105 containerd[1468]: time="2025-11-01T00:35:06.496845404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:06.497105 containerd[1468]: time="2025-11-01T00:35:06.496944870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:06.497916 containerd[1468]: time="2025-11-01T00:35:06.497243721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:06.497916 containerd[1468]: time="2025-11-01T00:35:06.497285361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:06.497916 containerd[1468]: time="2025-11-01T00:35:06.497300907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:06.497916 containerd[1468]: time="2025-11-01T00:35:06.497368760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:06.497916 containerd[1468]: time="2025-11-01T00:35:06.497215474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:06.497916 containerd[1468]: time="2025-11-01T00:35:06.497268412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:06.497916 containerd[1468]: time="2025-11-01T00:35:06.497281224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:06.497916 containerd[1468]: time="2025-11-01T00:35:06.497359094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:06.522737 systemd[1]: Started cri-containerd-312888bc8e1f8c168e166ba588bdb062ab1a4196e24645861904b15720b4390f.scope - libcontainer container 312888bc8e1f8c168e166ba588bdb062ab1a4196e24645861904b15720b4390f. Nov 1 00:35:06.524791 systemd[1]: Started cri-containerd-49826906a916a6c9afbce07df664ce015f36767a612d2fc301509711b8b25703.scope - libcontainer container 49826906a916a6c9afbce07df664ce015f36767a612d2fc301509711b8b25703. Nov 1 00:35:06.528155 systemd[1]: Started cri-containerd-c143fb3bb6127e70822b3143a7fb6f892d0724fedb6bf65deec07016a876920e.scope - libcontainer container c143fb3bb6127e70822b3143a7fb6f892d0724fedb6bf65deec07016a876920e. 
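
The Pulled events above show all three sandboxes resolving registry.k8s.io/pause:3.8 to the same pinned digest. The equivalent pull through containerd's Go client might look like the sketch below; the socket path and the k8s.io namespace are the conventional ones for a kubelet-managed containerd, not values taken from the log:

    // pullpause.go: a sketch of the pull that produced the ImageCreate/Pulled
    // events above, via containerd's Go client.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        // The digest should match the repo digest recorded in the log.
        fmt.Println(img.Name(), img.Target().Digest)
    }
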
Nov 1 00:35:06.564920 containerd[1468]: time="2025-11-01T00:35:06.564876560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"312888bc8e1f8c168e166ba588bdb062ab1a4196e24645861904b15720b4390f\"" Nov 1 00:35:06.566219 kubelet[2134]: E1101 00:35:06.566185 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:06.568984 containerd[1468]: time="2025-11-01T00:35:06.568956113Z" level=info msg="CreateContainer within sandbox \"312888bc8e1f8c168e166ba588bdb062ab1a4196e24645861904b15720b4390f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:35:06.572710 containerd[1468]: time="2025-11-01T00:35:06.572636305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:639c64a68c7aeb5708b93264671df53e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c143fb3bb6127e70822b3143a7fb6f892d0724fedb6bf65deec07016a876920e\"" Nov 1 00:35:06.573175 kubelet[2134]: E1101 00:35:06.573156 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:06.575619 containerd[1468]: time="2025-11-01T00:35:06.574816006Z" level=info msg="CreateContainer within sandbox \"c143fb3bb6127e70822b3143a7fb6f892d0724fedb6bf65deec07016a876920e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:35:06.576108 containerd[1468]: time="2025-11-01T00:35:06.576067322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"49826906a916a6c9afbce07df664ce015f36767a612d2fc301509711b8b25703\"" Nov 1 00:35:06.576733 kubelet[2134]: E1101 00:35:06.576714 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:06.578760 containerd[1468]: time="2025-11-01T00:35:06.578728679Z" level=info msg="CreateContainer within sandbox \"49826906a916a6c9afbce07df664ce015f36767a612d2fc301509711b8b25703\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:35:06.592851 containerd[1468]: time="2025-11-01T00:35:06.592772089Z" level=info msg="CreateContainer within sandbox \"312888bc8e1f8c168e166ba588bdb062ab1a4196e24645861904b15720b4390f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee513d8c86a32c68b850b6db8450da6cc3f0f08b52fd28e3708bed341915ac1d\"" Nov 1 00:35:06.593289 containerd[1468]: time="2025-11-01T00:35:06.593257021Z" level=info msg="StartContainer for \"ee513d8c86a32c68b850b6db8450da6cc3f0f08b52fd28e3708bed341915ac1d\"" Nov 1 00:35:06.596409 containerd[1468]: time="2025-11-01T00:35:06.596373329Z" level=info msg="CreateContainer within sandbox \"c143fb3bb6127e70822b3143a7fb6f892d0724fedb6bf65deec07016a876920e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"75f9939b38024c1b26035eed6222a57d31abe06905d73bf68f53f90aadef50bb\"" Nov 1 00:35:06.597737 containerd[1468]: time="2025-11-01T00:35:06.596785059Z" level=info msg="StartContainer for \"75f9939b38024c1b26035eed6222a57d31abe06905d73bf68f53f90aadef50bb\"" Nov 1 00:35:06.600340 
containerd[1468]: time="2025-11-01T00:35:06.600304193Z" level=info msg="CreateContainer within sandbox \"49826906a916a6c9afbce07df664ce015f36767a612d2fc301509711b8b25703\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0f9df20604201b2b0299b51b0e2e9377abc93ec1b5141b0a7ec8eb76a896c603\"" Nov 1 00:35:06.600909 containerd[1468]: time="2025-11-01T00:35:06.600883953Z" level=info msg="StartContainer for \"0f9df20604201b2b0299b51b0e2e9377abc93ec1b5141b0a7ec8eb76a896c603\"" Nov 1 00:35:06.619167 systemd[1]: Started cri-containerd-ee513d8c86a32c68b850b6db8450da6cc3f0f08b52fd28e3708bed341915ac1d.scope - libcontainer container ee513d8c86a32c68b850b6db8450da6cc3f0f08b52fd28e3708bed341915ac1d. Nov 1 00:35:06.631730 systemd[1]: Started cri-containerd-75f9939b38024c1b26035eed6222a57d31abe06905d73bf68f53f90aadef50bb.scope - libcontainer container 75f9939b38024c1b26035eed6222a57d31abe06905d73bf68f53f90aadef50bb. Nov 1 00:35:06.634340 systemd[1]: Started cri-containerd-0f9df20604201b2b0299b51b0e2e9377abc93ec1b5141b0a7ec8eb76a896c603.scope - libcontainer container 0f9df20604201b2b0299b51b0e2e9377abc93ec1b5141b0a7ec8eb76a896c603. Nov 1 00:35:06.672174 containerd[1468]: time="2025-11-01T00:35:06.670660683Z" level=info msg="StartContainer for \"ee513d8c86a32c68b850b6db8450da6cc3f0f08b52fd28e3708bed341915ac1d\" returns successfully" Nov 1 00:35:06.674079 containerd[1468]: time="2025-11-01T00:35:06.674049408Z" level=info msg="StartContainer for \"75f9939b38024c1b26035eed6222a57d31abe06905d73bf68f53f90aadef50bb\" returns successfully" Nov 1 00:35:06.686424 containerd[1468]: time="2025-11-01T00:35:06.686391678Z" level=info msg="StartContainer for \"0f9df20604201b2b0299b51b0e2e9377abc93ec1b5141b0a7ec8eb76a896c603\" returns successfully" Nov 1 00:35:07.096723 kubelet[2134]: E1101 00:35:07.096703 2134 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:35:07.097296 kubelet[2134]: E1101 00:35:07.097268 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:07.099389 kubelet[2134]: E1101 00:35:07.099363 2134 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:35:07.099483 kubelet[2134]: E1101 00:35:07.099462 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:07.100330 kubelet[2134]: E1101 00:35:07.100309 2134 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:35:07.100417 kubelet[2134]: E1101 00:35:07.100396 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:07.479115 kubelet[2134]: E1101 00:35:07.478799 2134 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:35:07.622501 kubelet[2134]: E1101 00:35:07.622385 2134 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1873bad531e4195f default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:35:04.059828575 +0000 UTC m=+0.582574167,LastTimestamp:2025-11-01 00:35:04.059828575 +0000 UTC m=+0.582574167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:35:07.730780 kubelet[2134]: I1101 00:35:07.730513 2134 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:35:07.735146 kubelet[2134]: I1101 00:35:07.735121 2134 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:35:07.764253 kubelet[2134]: I1101 00:35:07.764215 2134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:07.767433 kubelet[2134]: E1101 00:35:07.767407 2134 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:07.767433 kubelet[2134]: I1101 00:35:07.767427 2134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:07.768504 kubelet[2134]: E1101 00:35:07.768484 2134 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:07.768504 kubelet[2134]: I1101 00:35:07.768501 2134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:35:07.769438 kubelet[2134]: E1101 00:35:07.769417 2134 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:35:08.056576 kubelet[2134]: I1101 00:35:08.056465 2134 apiserver.go:52] "Watching apiserver" Nov 1 00:35:08.064220 kubelet[2134]: I1101 00:35:08.064200 2134 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:35:08.101353 kubelet[2134]: I1101 00:35:08.101328 2134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:08.101658 kubelet[2134]: I1101 00:35:08.101482 2134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:35:08.101658 kubelet[2134]: I1101 00:35:08.101617 2134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:08.102894 kubelet[2134]: E1101 00:35:08.102866 2134 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:08.103032 kubelet[2134]: E1101 00:35:08.102986 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:08.103466 kubelet[2134]: E1101 00:35:08.103445 2134 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:35:08.103548 kubelet[2134]: E1101 00:35:08.103534 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:08.103863 kubelet[2134]: E1101 00:35:08.103840 2134 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:08.104018 kubelet[2134]: E1101 00:35:08.103981 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:09.102787 kubelet[2134]: I1101 00:35:09.102718 2134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:35:09.106204 kubelet[2134]: E1101 00:35:09.106184 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:09.409687 systemd[1]: Reloading requested from client PID 2415 ('systemctl') (unit session-7.scope)... Nov 1 00:35:09.409704 systemd[1]: Reloading... Nov 1 00:35:09.489391 zram_generator::config[2457]: No configuration found. Nov 1 00:35:09.593482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:35:09.681690 systemd[1]: Reloading finished in 271 ms. Nov 1 00:35:09.723772 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:35:09.747004 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:35:09.747285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:35:09.747336 systemd[1]: kubelet.service: Consumed 1.045s CPU time, 135.9M memory peak, 0B memory swap peak. Nov 1 00:35:09.758783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:35:09.916832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:35:09.921321 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:35:09.963199 kubelet[2499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:35:09.963199 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:35:09.963199 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:35:09.963546 kubelet[2499]: I1101 00:35:09.963225 2499 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:35:09.970096 kubelet[2499]: I1101 00:35:09.970053 2499 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:35:09.970096 kubelet[2499]: I1101 00:35:09.970078 2499 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:35:09.970315 kubelet[2499]: I1101 00:35:09.970291 2499 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:35:09.971435 kubelet[2499]: I1101 00:35:09.971410 2499 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:35:09.973585 kubelet[2499]: I1101 00:35:09.973562 2499 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:35:09.978024 kubelet[2499]: E1101 00:35:09.977985 2499 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:35:09.978076 kubelet[2499]: I1101 00:35:09.978027 2499 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:35:09.982756 kubelet[2499]: I1101 00:35:09.982724 2499 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:35:09.983019 kubelet[2499]: I1101 00:35:09.982987 2499 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:35:09.983170 kubelet[2499]: I1101 00:35:09.983014 2499 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:35:09.983238 kubelet[2499]: I1101 00:35:09.983183 2499 topology_manager.go:138] "Creating topology 
manager with none policy" Nov 1 00:35:09.983238 kubelet[2499]: I1101 00:35:09.983192 2499 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:35:09.983285 kubelet[2499]: I1101 00:35:09.983244 2499 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:35:09.983399 kubelet[2499]: I1101 00:35:09.983389 2499 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:35:09.983420 kubelet[2499]: I1101 00:35:09.983411 2499 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:35:09.983443 kubelet[2499]: I1101 00:35:09.983427 2499 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:35:09.983443 kubelet[2499]: I1101 00:35:09.983440 2499 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:35:09.984030 kubelet[2499]: I1101 00:35:09.984007 2499 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:35:09.986366 kubelet[2499]: I1101 00:35:09.984409 2499 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:35:09.986366 kubelet[2499]: I1101 00:35:09.984852 2499 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:35:09.986366 kubelet[2499]: I1101 00:35:09.984875 2499 server.go:1287] "Started kubelet" Nov 1 00:35:09.986366 kubelet[2499]: I1101 00:35:09.985124 2499 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:35:09.986366 kubelet[2499]: I1101 00:35:09.985246 2499 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:35:09.986366 kubelet[2499]: I1101 00:35:09.985491 2499 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:35:09.986366 kubelet[2499]: I1101 00:35:09.986049 2499 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:35:09.988213 kubelet[2499]: I1101 00:35:09.988189 2499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:35:09.989024 kubelet[2499]: I1101 00:35:09.988998 2499 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:35:09.989783 kubelet[2499]: E1101 00:35:09.989755 2499 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:35:09.989920 kubelet[2499]: E1101 00:35:09.989869 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:35:09.989920 kubelet[2499]: I1101 00:35:09.989920 2499 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:35:09.990148 kubelet[2499]: I1101 00:35:09.990126 2499 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:35:09.990421 kubelet[2499]: I1101 00:35:09.990401 2499 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:35:09.990563 kubelet[2499]: I1101 00:35:09.990536 2499 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:35:09.990653 kubelet[2499]: I1101 00:35:09.990628 2499 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:35:09.997658 kubelet[2499]: I1101 00:35:09.997629 2499 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:35:10.010232 kubelet[2499]: I1101 00:35:10.008998 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:35:10.010710 kubelet[2499]: I1101 00:35:10.010687 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:35:10.010887 kubelet[2499]: I1101 00:35:10.010832 2499 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:35:10.010887 kubelet[2499]: I1101 00:35:10.010855 2499 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
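
The crio factory failure here is just a missing socket: cadvisor probes each known runtime endpoint and registers only the ones that answer, which is why containerd and systemd register successfully while CRI-O does not. A sketch of the same presence check for the two sockets this log mentions:

    // crisocket.go: stats the runtime socket paths and reports which container
    // runtime is actually present on the host.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        for _, p := range []string{
            "/var/run/crio/crio.sock",         // missing on this host, per the log
            "/run/containerd/containerd.sock", // the active runtime (containerd v1.7.21)
        } {
            if _, err := os.Stat(p); err != nil {
                fmt.Printf("%s: %v\n", p, err)
                continue
            }
            fmt.Printf("%s: present\n", p)
        }
    }
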
Nov 1 00:35:10.010887 kubelet[2499]: I1101 00:35:10.010865 2499 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:35:10.011386 kubelet[2499]: E1101 00:35:10.011016 2499 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:35:10.030471 kubelet[2499]: I1101 00:35:10.030443 2499 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:35:10.030471 kubelet[2499]: I1101 00:35:10.030463 2499 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:35:10.030585 kubelet[2499]: I1101 00:35:10.030480 2499 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:35:10.030642 kubelet[2499]: I1101 00:35:10.030622 2499 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:35:10.030672 kubelet[2499]: I1101 00:35:10.030638 2499 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:35:10.030672 kubelet[2499]: I1101 00:35:10.030658 2499 policy_none.go:49] "None policy: Start" Nov 1 00:35:10.030672 kubelet[2499]: I1101 00:35:10.030666 2499 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:35:10.030731 kubelet[2499]: I1101 00:35:10.030677 2499 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:35:10.030965 kubelet[2499]: I1101 00:35:10.030940 2499 state_mem.go:75] "Updated machine memory state" Nov 1 00:35:10.034853 kubelet[2499]: I1101 00:35:10.034822 2499 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:35:10.035004 kubelet[2499]: I1101 00:35:10.034990 2499 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:35:10.035065 kubelet[2499]: I1101 00:35:10.035003 2499 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:35:10.035521 kubelet[2499]: I1101 00:35:10.035453 2499 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:35:10.035799 kubelet[2499]: E1101 00:35:10.035779 2499 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:35:10.111713 kubelet[2499]: I1101 00:35:10.111669 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:10.111853 kubelet[2499]: I1101 00:35:10.111774 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:35:10.111853 kubelet[2499]: I1101 00:35:10.111691 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:10.117504 kubelet[2499]: E1101 00:35:10.117480 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:35:10.139460 kubelet[2499]: I1101 00:35:10.139432 2499 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:35:10.143801 kubelet[2499]: I1101 00:35:10.143775 2499 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:35:10.143853 kubelet[2499]: I1101 00:35:10.143838 2499 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:35:10.191816 kubelet[2499]: I1101 00:35:10.191768 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/639c64a68c7aeb5708b93264671df53e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"639c64a68c7aeb5708b93264671df53e\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:10.191816 kubelet[2499]: I1101 00:35:10.191806 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/639c64a68c7aeb5708b93264671df53e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"639c64a68c7aeb5708b93264671df53e\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:10.191978 kubelet[2499]: I1101 00:35:10.191831 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:10.191978 kubelet[2499]: I1101 00:35:10.191850 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:10.191978 kubelet[2499]: I1101 00:35:10.191874 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:35:10.191978 kubelet[2499]: I1101 00:35:10.191889 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/639c64a68c7aeb5708b93264671df53e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"639c64a68c7aeb5708b93264671df53e\") " pod="kube-system/kube-apiserver-localhost" 
Nov 1 00:35:10.191978 kubelet[2499]: I1101 00:35:10.191923 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:10.192091 kubelet[2499]: I1101 00:35:10.191938 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:10.192091 kubelet[2499]: I1101 00:35:10.191992 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:35:10.418307 kubelet[2499]: E1101 00:35:10.417522 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:10.418307 kubelet[2499]: E1101 00:35:10.417534 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:10.418307 kubelet[2499]: E1101 00:35:10.417633 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:10.984104 kubelet[2499]: I1101 00:35:10.984062 2499 apiserver.go:52] "Watching apiserver" Nov 1 00:35:10.991209 kubelet[2499]: I1101 00:35:10.990480 2499 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:35:11.020241 kubelet[2499]: I1101 00:35:11.019945 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:11.020241 kubelet[2499]: E1101 00:35:11.020087 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:11.020241 kubelet[2499]: E1101 00:35:11.020176 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:11.023980 kubelet[2499]: E1101 00:35:11.023950 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:35:11.024117 kubelet[2499]: E1101 00:35:11.024053 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:11.036913 kubelet[2499]: I1101 00:35:11.036860 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.03685066 podStartE2EDuration="1.03685066s" podCreationTimestamp="2025-11-01 00:35:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:35:11.036698456 +0000 UTC m=+1.111308669" watchObservedRunningTime="2025-11-01 00:35:11.03685066 +0000 UTC m=+1.111460873" Nov 1 00:35:11.046268 kubelet[2499]: I1101 00:35:11.046208 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.046188915 podStartE2EDuration="1.046188915s" podCreationTimestamp="2025-11-01 00:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:35:11.04600066 +0000 UTC m=+1.120610873" watchObservedRunningTime="2025-11-01 00:35:11.046188915 +0000 UTC m=+1.120799128" Nov 1 00:35:11.057437 kubelet[2499]: I1101 00:35:11.057395 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.057385034 podStartE2EDuration="2.057385034s" podCreationTimestamp="2025-11-01 00:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:35:11.051623136 +0000 UTC m=+1.126233349" watchObservedRunningTime="2025-11-01 00:35:11.057385034 +0000 UTC m=+1.131995247" Nov 1 00:35:12.020996 kubelet[2499]: E1101 00:35:12.020959 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:12.021488 kubelet[2499]: E1101 00:35:12.021025 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:14.640957 kubelet[2499]: E1101 00:35:14.640919 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:15.250961 kubelet[2499]: I1101 00:35:15.250935 2499 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:35:15.251324 containerd[1468]: time="2025-11-01T00:35:15.251284898Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:35:15.251672 kubelet[2499]: I1101 00:35:15.251492 2499 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:35:15.885833 systemd[1]: Created slice kubepods-besteffort-pod4686458d_990e_4aac_944a_ed0cc56551da.slice - libcontainer container kubepods-besteffort-pod4686458d_990e_4aac_944a_ed0cc56551da.slice. 
Nov 1 00:35:15.929952 kubelet[2499]: I1101 00:35:15.929904 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4686458d-990e-4aac-944a-ed0cc56551da-xtables-lock\") pod \"kube-proxy-986x8\" (UID: \"4686458d-990e-4aac-944a-ed0cc56551da\") " pod="kube-system/kube-proxy-986x8"
Nov 1 00:35:15.929952 kubelet[2499]: I1101 00:35:15.929948 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4686458d-990e-4aac-944a-ed0cc56551da-kube-proxy\") pod \"kube-proxy-986x8\" (UID: \"4686458d-990e-4aac-944a-ed0cc56551da\") " pod="kube-system/kube-proxy-986x8"
Nov 1 00:35:15.929952 kubelet[2499]: I1101 00:35:15.929968 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4686458d-990e-4aac-944a-ed0cc56551da-lib-modules\") pod \"kube-proxy-986x8\" (UID: \"4686458d-990e-4aac-944a-ed0cc56551da\") " pod="kube-system/kube-proxy-986x8"
Nov 1 00:35:15.930365 kubelet[2499]: I1101 00:35:15.929985 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ps4t\" (UniqueName: \"kubernetes.io/projected/4686458d-990e-4aac-944a-ed0cc56551da-kube-api-access-8ps4t\") pod \"kube-proxy-986x8\" (UID: \"4686458d-990e-4aac-944a-ed0cc56551da\") " pod="kube-system/kube-proxy-986x8"
Nov 1 00:35:16.194903 kubelet[2499]: E1101 00:35:16.194840 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:16.195506 containerd[1468]: time="2025-11-01T00:35:16.195455829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-986x8,Uid:4686458d-990e-4aac-944a-ed0cc56551da,Namespace:kube-system,Attempt:0,}"
Nov 1 00:35:16.241142 containerd[1468]: time="2025-11-01T00:35:16.240687418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:35:16.241142 containerd[1468]: time="2025-11-01T00:35:16.240766508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:35:16.241142 containerd[1468]: time="2025-11-01T00:35:16.240809474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:35:16.241142 containerd[1468]: time="2025-11-01T00:35:16.241063543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:35:16.268743 systemd[1]: Started cri-containerd-4eecae535325231aeff7acbc667325097c4ef712cd5f28b9156f3dd6f7e1b678.scope - libcontainer container 4eecae535325231aeff7acbc667325097c4ef712cd5f28b9156f3dd6f7e1b678.
Nov 1 00:35:16.269651 systemd[1]: Created slice kubepods-besteffort-pod2d876009_7cd7_43b3_85c1_2033ab55df64.slice - libcontainer container kubepods-besteffort-pod2d876009_7cd7_43b3_85c1_2033ab55df64.slice.
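The Created slice lines above show kubelet's systemd cgroup naming: each pod gets a kubepods-<qos>-pod<uid>.slice unit, with the dashes of the pod UID rewritten as underscores because systemd treats "-" as its slice hierarchy separator. A small Go sketch of that rewrite (the helper name is mine, not a kubelet API):

package main

import (
	"fmt"
	"strings"
)

// sliceNameForPod reproduces the unit names in the log above: dashes in
// the pod UID become underscores so systemd does not parse them as
// slice hierarchy separators. Illustrative helper, not kubelet's code.
func sliceNameForPod(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceNameForPod("besteffort", "2d876009-7cd7-43b3-85c1-2033ab55df64"))
	// Output: kubepods-besteffort-pod2d876009_7cd7_43b3_85c1_2033ab55df64.slice
}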
Nov 1 00:35:16.291897 containerd[1468]: time="2025-11-01T00:35:16.291848483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-986x8,Uid:4686458d-990e-4aac-944a-ed0cc56551da,Namespace:kube-system,Attempt:0,} returns sandbox id \"4eecae535325231aeff7acbc667325097c4ef712cd5f28b9156f3dd6f7e1b678\""
Nov 1 00:35:16.292394 kubelet[2499]: E1101 00:35:16.292370 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:16.294280 containerd[1468]: time="2025-11-01T00:35:16.294249082Z" level=info msg="CreateContainer within sandbox \"4eecae535325231aeff7acbc667325097c4ef712cd5f28b9156f3dd6f7e1b678\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 1 00:35:16.308883 containerd[1468]: time="2025-11-01T00:35:16.308838715Z" level=info msg="CreateContainer within sandbox \"4eecae535325231aeff7acbc667325097c4ef712cd5f28b9156f3dd6f7e1b678\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"648f7e05b626c0c0b6f9240bcd087ee4015512280fe9f01824d96f2156756488\""
Nov 1 00:35:16.309325 containerd[1468]: time="2025-11-01T00:35:16.309302195Z" level=info msg="StartContainer for \"648f7e05b626c0c0b6f9240bcd087ee4015512280fe9f01824d96f2156756488\""
Nov 1 00:35:16.333041 kubelet[2499]: I1101 00:35:16.332963 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znzq4\" (UniqueName: \"kubernetes.io/projected/2d876009-7cd7-43b3-85c1-2033ab55df64-kube-api-access-znzq4\") pod \"tigera-operator-7dcd859c48-slhwv\" (UID: \"2d876009-7cd7-43b3-85c1-2033ab55df64\") " pod="tigera-operator/tigera-operator-7dcd859c48-slhwv"
Nov 1 00:35:16.333041 kubelet[2499]: I1101 00:35:16.332993 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2d876009-7cd7-43b3-85c1-2033ab55df64-var-lib-calico\") pod \"tigera-operator-7dcd859c48-slhwv\" (UID: \"2d876009-7cd7-43b3-85c1-2033ab55df64\") " pod="tigera-operator/tigera-operator-7dcd859c48-slhwv"
Nov 1 00:35:16.335718 systemd[1]: Started cri-containerd-648f7e05b626c0c0b6f9240bcd087ee4015512280fe9f01824d96f2156756488.scope - libcontainer container 648f7e05b626c0c0b6f9240bcd087ee4015512280fe9f01824d96f2156756488.
Nov 1 00:35:16.363666 containerd[1468]: time="2025-11-01T00:35:16.363630737Z" level=info msg="StartContainer for \"648f7e05b626c0c0b6f9240bcd087ee4015512280fe9f01824d96f2156756488\" returns successfully"
Nov 1 00:35:16.574417 containerd[1468]: time="2025-11-01T00:35:16.574315397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-slhwv,Uid:2d876009-7cd7-43b3-85c1-2033ab55df64,Namespace:tigera-operator,Attempt:0,}"
Nov 1 00:35:16.595976 containerd[1468]: time="2025-11-01T00:35:16.595864268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:35:16.596611 containerd[1468]: time="2025-11-01T00:35:16.596399105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:35:16.596611 containerd[1468]: time="2025-11-01T00:35:16.596419331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:35:16.596611 containerd[1468]: time="2025-11-01T00:35:16.596537259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:35:16.613728 systemd[1]: Started cri-containerd-2a129146c4ee5da866285ade41372b1b4b3a108d8e3ca4cbcaa856e2404fe3f8.scope - libcontainer container 2a129146c4ee5da866285ade41372b1b4b3a108d8e3ca4cbcaa856e2404fe3f8.
Nov 1 00:35:16.653979 containerd[1468]: time="2025-11-01T00:35:16.653936106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-slhwv,Uid:2d876009-7cd7-43b3-85c1-2033ab55df64,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2a129146c4ee5da866285ade41372b1b4b3a108d8e3ca4cbcaa856e2404fe3f8\""
Nov 1 00:35:16.657355 containerd[1468]: time="2025-11-01T00:35:16.656760614Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 1 00:35:17.027917 kubelet[2499]: E1101 00:35:17.027879 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:17.036563 kubelet[2499]: I1101 00:35:17.036506 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-986x8" podStartSLOduration=2.036488024 podStartE2EDuration="2.036488024s" podCreationTimestamp="2025-11-01 00:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:35:17.036464874 +0000 UTC m=+7.111075087" watchObservedRunningTime="2025-11-01 00:35:17.036488024 +0000 UTC m=+7.111098237"
Nov 1 00:35:17.043386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779474282.mount: Deactivated successfully.
Nov 1 00:35:17.240955 kubelet[2499]: E1101 00:35:17.240934 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:17.889386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265229270.mount: Deactivated successfully.
Nov 1 00:35:17.909021 kubelet[2499]: E1101 00:35:17.908988 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:18.028957 kubelet[2499]: E1101 00:35:18.028919 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:18.029388 kubelet[2499]: E1101 00:35:18.029370 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:18.220668 containerd[1468]: time="2025-11-01T00:35:18.220627319Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:35:18.221470 containerd[1468]: time="2025-11-01T00:35:18.221419972Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 1 00:35:18.222388 containerd[1468]: time="2025-11-01T00:35:18.222312372Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:35:18.224367 containerd[1468]: time="2025-11-01T00:35:18.224322025Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:35:18.225129 containerd[1468]: time="2025-11-01T00:35:18.225102957Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.568303704s"
Nov 1 00:35:18.225171 containerd[1468]: time="2025-11-01T00:35:18.225135435Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 1 00:35:18.229367 containerd[1468]: time="2025-11-01T00:35:18.229326978Z" level=info msg="CreateContainer within sandbox \"2a129146c4ee5da866285ade41372b1b4b3a108d8e3ca4cbcaa856e2404fe3f8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 1 00:35:18.242798 containerd[1468]: time="2025-11-01T00:35:18.242754164Z" level=info msg="CreateContainer within sandbox \"2a129146c4ee5da866285ade41372b1b4b3a108d8e3ca4cbcaa856e2404fe3f8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dac163e61b2923b5725a5236b857c7ac8332152f63f3e5960b119f6793a8afde\""
Nov 1 00:35:18.243632 containerd[1468]: time="2025-11-01T00:35:18.243154037Z" level=info msg="StartContainer for \"dac163e61b2923b5725a5236b857c7ac8332152f63f3e5960b119f6793a8afde\""
Nov 1 00:35:18.270732 systemd[1]: Started cri-containerd-dac163e61b2923b5725a5236b857c7ac8332152f63f3e5960b119f6793a8afde.scope - libcontainer container dac163e61b2923b5725a5236b857c7ac8332152f63f3e5960b119f6793a8afde.
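The pull above reports 25061691 bytes read in 1.568303704s; those two figures imply a transfer rate of roughly 15 MiB/s, which a few lines of Go confirm:

package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 25061691 // "bytes read" from the containerd entry above
	elapsed, err := time.ParseDuration("1.568303704s") // duration from "Pulled image"
	if err != nil {
		panic(err)
	}
	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %s ≈ %.1f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
}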
Nov 1 00:35:18.293809 containerd[1468]: time="2025-11-01T00:35:18.293769740Z" level=info msg="StartContainer for \"dac163e61b2923b5725a5236b857c7ac8332152f63f3e5960b119f6793a8afde\" returns successfully"
Nov 1 00:35:19.032313 kubelet[2499]: E1101 00:35:19.031218 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:23.223471 sudo[1647]: pam_unix(sudo:session): session closed for user root
Nov 1 00:35:23.226745 sshd[1644]: pam_unix(sshd:session): session closed for user core
Nov 1 00:35:23.233632 systemd[1]: sshd@6-10.0.0.5:22-10.0.0.1:50956.service: Deactivated successfully.
Nov 1 00:35:23.238034 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 00:35:23.238780 systemd[1]: session-7.scope: Consumed 4.714s CPU time, 157.5M memory peak, 0B memory swap peak.
Nov 1 00:35:23.247033 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit.
Nov 1 00:35:23.248898 systemd-logind[1455]: Removed session 7.
Nov 1 00:35:24.645449 kubelet[2499]: E1101 00:35:24.645371 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:24.655069 kubelet[2499]: I1101 00:35:24.655018 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-slhwv" podStartSLOduration=7.083163151 podStartE2EDuration="8.655004896s" podCreationTimestamp="2025-11-01 00:35:16 +0000 UTC" firstStartedPulling="2025-11-01 00:35:16.656293759 +0000 UTC m=+6.730903972" lastFinishedPulling="2025-11-01 00:35:18.228135514 +0000 UTC m=+8.302745717" observedRunningTime="2025-11-01 00:35:19.038577345 +0000 UTC m=+9.113187558" watchObservedRunningTime="2025-11-01 00:35:24.655004896 +0000 UTC m=+14.729615109"
Nov 1 00:35:26.796723 update_engine[1458]: I20251101 00:35:26.796643 1458 update_attempter.cc:509] Updating boot flags...
Nov 1 00:35:27.072859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2919)
Nov 1 00:35:27.118807 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2922)
Nov 1 00:35:27.154654 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2922)
Nov 1 00:35:27.347775 systemd[1]: Created slice kubepods-besteffort-pod17799e35_447a_41b0_bb71_cfd3d8c3b479.slice - libcontainer container kubepods-besteffort-pod17799e35_447a_41b0_bb71_cfd3d8c3b479.slice.
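The tigera-operator startup entry above is the first pod in this log with a real image-pull window, and its numbers are internally consistent: podStartE2EDuration (8.655004896s) minus the pull window (m=+6.730903972 to m=+8.302745717) gives exactly podStartSLOduration (7.083163151s), i.e. the SLO figure appears to exclude time spent pulling the image. A quick check of that arithmetic using the monotonic offsets from the entry (reading, not kubelet's code):

package main

import "fmt"

func main() {
	// Monotonic "m=+" offsets copied from the kubelet entry above (seconds).
	const (
		e2e       = 8.655004896 // podStartE2EDuration
		firstPull = 6.730903972 // firstStartedPulling
		lastPull  = 8.302745717 // lastFinishedPulling
	)
	// SLO duration = end-to-end duration minus the image-pull window.
	fmt.Printf("podStartSLOduration = %.9fs\n", e2e-(lastPull-firstPull)) // 7.083163151s
}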
Nov 1 00:35:27.503380 kubelet[2499]: I1101 00:35:27.503234 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/17799e35-447a-41b0-bb71-cfd3d8c3b479-typha-certs\") pod \"calico-typha-6bb78b4bc7-kdfkf\" (UID: \"17799e35-447a-41b0-bb71-cfd3d8c3b479\") " pod="calico-system/calico-typha-6bb78b4bc7-kdfkf"
Nov 1 00:35:27.504016 kubelet[2499]: I1101 00:35:27.503922 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17799e35-447a-41b0-bb71-cfd3d8c3b479-tigera-ca-bundle\") pod \"calico-typha-6bb78b4bc7-kdfkf\" (UID: \"17799e35-447a-41b0-bb71-cfd3d8c3b479\") " pod="calico-system/calico-typha-6bb78b4bc7-kdfkf"
Nov 1 00:35:27.504016 kubelet[2499]: I1101 00:35:27.503970 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gglgl\" (UniqueName: \"kubernetes.io/projected/17799e35-447a-41b0-bb71-cfd3d8c3b479-kube-api-access-gglgl\") pod \"calico-typha-6bb78b4bc7-kdfkf\" (UID: \"17799e35-447a-41b0-bb71-cfd3d8c3b479\") " pod="calico-system/calico-typha-6bb78b4bc7-kdfkf"
Nov 1 00:35:27.530623 systemd[1]: Created slice kubepods-besteffort-pod7d0e37b5_6436_4e09_8516_2e963d120c1b.slice - libcontainer container kubepods-besteffort-pod7d0e37b5_6436_4e09_8516_2e963d120c1b.slice.
Nov 1 00:35:27.604444 kubelet[2499]: I1101 00:35:27.604407 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7d0e37b5-6436-4e09-8516-2e963d120c1b-var-lib-calico\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604444 kubelet[2499]: I1101 00:35:27.604444 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d0e37b5-6436-4e09-8516-2e963d120c1b-tigera-ca-bundle\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604444 kubelet[2499]: I1101 00:35:27.604460 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7d0e37b5-6436-4e09-8516-2e963d120c1b-var-run-calico\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604660 kubelet[2499]: I1101 00:35:27.604486 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7d0e37b5-6436-4e09-8516-2e963d120c1b-cni-bin-dir\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604660 kubelet[2499]: I1101 00:35:27.604504 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7d0e37b5-6436-4e09-8516-2e963d120c1b-node-certs\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604660 kubelet[2499]: I1101 00:35:27.604518 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d0e37b5-6436-4e09-8516-2e963d120c1b-xtables-lock\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604660 kubelet[2499]: I1101 00:35:27.604560 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7d0e37b5-6436-4e09-8516-2e963d120c1b-cni-net-dir\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604660 kubelet[2499]: I1101 00:35:27.604578 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7d0e37b5-6436-4e09-8516-2e963d120c1b-cni-log-dir\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604787 kubelet[2499]: I1101 00:35:27.604608 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f74sl\" (UniqueName: \"kubernetes.io/projected/7d0e37b5-6436-4e09-8516-2e963d120c1b-kube-api-access-f74sl\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604787 kubelet[2499]: I1101 00:35:27.604629 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d0e37b5-6436-4e09-8516-2e963d120c1b-lib-modules\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604787 kubelet[2499]: I1101 00:35:27.604644 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7d0e37b5-6436-4e09-8516-2e963d120c1b-policysync\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.604787 kubelet[2499]: I1101 00:35:27.604676 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7d0e37b5-6436-4e09-8516-2e963d120c1b-flexvol-driver-host\") pod \"calico-node-jx6wz\" (UID: \"7d0e37b5-6436-4e09-8516-2e963d120c1b\") " pod="calico-system/calico-node-jx6wz"
Nov 1 00:35:27.651749 kubelet[2499]: E1101 00:35:27.651723 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:27.652050 containerd[1468]: time="2025-11-01T00:35:27.652012808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb78b4bc7-kdfkf,Uid:17799e35-447a-41b0-bb71-cfd3d8c3b479,Namespace:calico-system,Attempt:0,}"
Nov 1 00:35:27.675513 containerd[1468]: time="2025-11-01T00:35:27.674963937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:35:27.675513 containerd[1468]: time="2025-11-01T00:35:27.675487281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:35:27.675513 containerd[1468]: time="2025-11-01T00:35:27.675501437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:35:27.675689 containerd[1468]: time="2025-11-01T00:35:27.675582464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:35:27.694734 systemd[1]: Started cri-containerd-69969274ba416a6f20625e61aadb5092de0cd241d658671a71b4e65605e52511.scope - libcontainer container 69969274ba416a6f20625e61aadb5092de0cd241d658671a71b4e65605e52511.
Nov 1 00:35:27.714822 kubelet[2499]: E1101 00:35:27.714785 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:35:27.714822 kubelet[2499]: W1101 00:35:27.714811 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:35:27.715383 kubelet[2499]: E1101 00:35:27.714842 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:35:27.732930 kubelet[2499]: E1101 00:35:27.732885 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6"
Nov 1 00:35:27.746090 containerd[1468]: time="2025-11-01T00:35:27.746050073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb78b4bc7-kdfkf,Uid:17799e35-447a-41b0-bb71-cfd3d8c3b479,Namespace:calico-system,Attempt:0,} returns sandbox id \"69969274ba416a6f20625e61aadb5092de0cd241d658671a71b4e65605e52511\""
Nov 1 00:35:27.746699 kubelet[2499]: E1101 00:35:27.746665 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:27.747620 containerd[1468]: time="2025-11-01T00:35:27.747398792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
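The driver-call failures that dominate this window are kubelet probing its FlexVolume plugin directory: for each plugin it execs the driver binary (here nodeagent~uds/uds) with an init argument and expects a JSON status on stdout. The binary does not exist, so the exec fails with empty output, and unmarshalling "" is exactly what yields "unexpected end of JSON input". A hedged Go sketch of that call pattern (the struct and helper are illustrative, not kubelet's actual types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a minimal stand-in for a FlexVolume driver response;
// the field names here are assumptions made for this sketch.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(path string, args ...string) (*DriverStatus, error) {
	// A missing binary makes CombinedOutput fail with empty output,
	// matching the W "driver call failed ... output: \"\"" entries.
	out, execErr := exec.Command(path, args...).CombinedOutput()
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Unmarshalling empty output fails with
		// "unexpected end of JSON input", as in the E entries above.
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	st, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(st, err)
}

Kubelet re-probes the plugin directory on every volume reconcile, which is why the identical three-line failure recurs continuously below.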
Nov 1 00:35:27.815619 kubelet[2499]: I1101 00:35:27.815613 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/31c28b53-e76c-45d5-b66c-cb1d82d504b6-socket-dir\") pod \"csi-node-driver-jzfns\" (UID: \"31c28b53-e76c-45d5-b66c-cb1d82d504b6\") " pod="calico-system/csi-node-driver-jzfns"
Nov 1 00:35:27.815832 kubelet[2499]: I1101 00:35:27.815855 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278m6\" (UniqueName: \"kubernetes.io/projected/31c28b53-e76c-45d5-b66c-cb1d82d504b6-kube-api-access-278m6\") pod \"csi-node-driver-jzfns\" (UID: \"31c28b53-e76c-45d5-b66c-cb1d82d504b6\") " pod="calico-system/csi-node-driver-jzfns"
Nov 1 00:35:27.816413 kubelet[2499]: I1101 00:35:27.816215 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/31c28b53-e76c-45d5-b66c-cb1d82d504b6-registration-dir\") pod \"csi-node-driver-jzfns\" (UID: \"31c28b53-e76c-45d5-b66c-cb1d82d504b6\") " pod="calico-system/csi-node-driver-jzfns"
Nov 1 00:35:27.816777 kubelet[2499]: I1101 00:35:27.816724 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/31c28b53-e76c-45d5-b66c-cb1d82d504b6-varrun\") pod \"csi-node-driver-jzfns\" (UID: \"31c28b53-e76c-45d5-b66c-cb1d82d504b6\") " pod="calico-system/csi-node-driver-jzfns"
Nov 1 00:35:27.817026 kubelet[2499]: I1101 00:35:27.816984 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31c28b53-e76c-45d5-b66c-cb1d82d504b6-kubelet-dir\") pod \"csi-node-driver-jzfns\" (UID: \"31c28b53-e76c-45d5-b66c-cb1d82d504b6\") " pod="calico-system/csi-node-driver-jzfns"
Nov 1 00:35:27.833349 kubelet[2499]: E1101 00:35:27.833317 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:27.833828 containerd[1468]: time="2025-11-01T00:35:27.833794934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jx6wz,Uid:7d0e37b5-6436-4e09-8516-2e963d120c1b,Namespace:calico-system,Attempt:0,}"
Nov 1 00:35:27.860238 containerd[1468]: time="2025-11-01T00:35:27.859917616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:35:27.860238 containerd[1468]: time="2025-11-01T00:35:27.859986161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:35:27.860238 containerd[1468]: time="2025-11-01T00:35:27.860018821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:35:27.860805 containerd[1468]: time="2025-11-01T00:35:27.860751416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:35:27.878727 systemd[1]: Started cri-containerd-bd185ea133c9383a18d8c1a1378121ab196b6a3a8d9c00e08cd9764aa59ce5be.scope - libcontainer container bd185ea133c9383a18d8c1a1378121ab196b6a3a8d9c00e08cd9764aa59ce5be.
Nov 1 00:35:27.899944 containerd[1468]: time="2025-11-01T00:35:27.899908925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jx6wz,Uid:7d0e37b5-6436-4e09-8516-2e963d120c1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd185ea133c9383a18d8c1a1378121ab196b6a3a8d9c00e08cd9764aa59ce5be\""
Nov 1 00:35:27.900539 kubelet[2499]: E1101 00:35:27.900519 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[The FlexVolume probe failure repeats here with identical content, apart from timestamps: the driver-call.go:262 unmarshal error, the driver-call.go:149 "executable file not found in $PATH" warning, and the plugins.go:695 probe error recur some two dozen times between 00:35:27.918 and 00:35:27.930.] Nov 1 00:35:27.930219 kubelet[2499]: E1101 00:35:27.930149 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 1 00:35:29.011831 kubelet[2499]: E1101 00:35:29.011775 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6" Nov 1 00:35:29.363475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3578815789.mount: Deactivated successfully. Nov 1 00:35:29.842804 containerd[1468]: time="2025-11-01T00:35:29.842745726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:35:29.843506 containerd[1468]: time="2025-11-01T00:35:29.843474830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:35:29.844546 containerd[1468]: time="2025-11-01T00:35:29.844523990Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:35:29.846481 containerd[1468]: time="2025-11-01T00:35:29.846437110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:35:29.847075 containerd[1468]: time="2025-11-01T00:35:29.847034583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.0996025s" Nov 1 00:35:29.847109 containerd[1468]: time="2025-11-01T00:35:29.847072893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:35:29.850724 containerd[1468]: time="2025-11-01T00:35:29.850687948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:35:29.866757 containerd[1468]: time="2025-11-01T00:35:29.866715652Z" level=info msg="CreateContainer within sandbox \"69969274ba416a6f20625e61aadb5092de0cd241d658671a71b4e65605e52511\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:35:29.879712 containerd[1468]: time="2025-11-01T00:35:29.879666658Z" level=info msg="CreateContainer within sandbox \"69969274ba416a6f20625e61aadb5092de0cd241d658671a71b4e65605e52511\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d70bf2f6b3c1c5f223fb8e8f1abeabccb732fd8a2fc8ec4cf68797dab0780b38\"" Nov 1 00:35:29.880127 containerd[1468]: time="2025-11-01T00:35:29.880075676Z" level=info msg="StartContainer for \"d70bf2f6b3c1c5f223fb8e8f1abeabccb732fd8a2fc8ec4cf68797dab0780b38\"" Nov 1 00:35:29.906760 systemd[1]: Started cri-containerd-d70bf2f6b3c1c5f223fb8e8f1abeabccb732fd8a2fc8ec4cf68797dab0780b38.scope - libcontainer container d70bf2f6b3c1c5f223fb8e8f1abeabccb732fd8a2fc8ec4cf68797dab0780b38. 
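The containerd entries above trace one pass through the CRI container lifecycle: the image is pulled, a container is created inside an existing pod sandbox, and then started. A rough sketch of those calls against containerd's gRPC endpoint follows; the pod and image names are copied from this log, but pairing them in one program is purely illustrative and the socket path is the usual containerd default, not something this log states.

```go
// Sketch of the CRI sequence visible above:
// RunPodSandbox -> CreateContainer -> StartContainer.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-node-jx6wz",
			Uid:       "7d0e37b5-6436-4e09-8516-2e963d120c1b",
			Namespace: "calico-system",
		},
	}
	// RunPodSandbox returns a sandbox id like the bd185ea1... the log prints.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxConfig})
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer within that sandbox, then StartContainer, matching the
	// "CreateContainer within sandbox ... returns container id" and
	// "StartContainer" entries above.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.4"},
		},
		SandboxConfig: sandboxConfig,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```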
Nov 1 00:35:29.945089 containerd[1468]: time="2025-11-01T00:35:29.945056366Z" level=info msg="StartContainer for \"d70bf2f6b3c1c5f223fb8e8f1abeabccb732fd8a2fc8ec4cf68797dab0780b38\" returns successfully" Nov 1 00:35:30.050713 kubelet[2499]: E1101 00:35:30.050674 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:30.063014 kubelet[2499]: I1101 00:35:30.062814 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bb78b4bc7-kdfkf" podStartSLOduration=0.959357154 podStartE2EDuration="3.062796423s" podCreationTimestamp="2025-11-01 00:35:27 +0000 UTC" firstStartedPulling="2025-11-01 00:35:27.747083597 +0000 UTC m=+17.821693810" lastFinishedPulling="2025-11-01 00:35:29.850522866 +0000 UTC m=+19.925133079" observedRunningTime="2025-11-01 00:35:30.059921184 +0000 UTC m=+20.134531397" watchObservedRunningTime="2025-11-01 00:35:30.062796423 +0000 UTC m=+20.137406626"
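The pod_startup_latency_tracker entry is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check in Go with the timestamps from this entry (a sketch, not the kubelet's own code):

```go
// Recompute the two durations in the log entry above from its timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// Layout matching the kubelet's printed form, e.g.
		// "2025-11-01 00:35:27.747083597 +0000 UTC".
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-01 00:35:27 +0000 UTC")
	firstPull := parse("2025-11-01 00:35:27.747083597 +0000 UTC")
	lastPull := parse("2025-11-01 00:35:29.850522866 +0000 UTC")
	observed := parse("2025-11-01 00:35:30.062796423 +0000 UTC")

	e2e := observed.Sub(created)         // 3.062796423s == podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 0.959357154s == podStartSLOduration
	fmt.Println(e2e, slo)
}
```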
[The same FlexVolume driver-call failure sequence repeats, identical apart from timestamps, some thirty times between 00:35:30.130 and 00:35:30.205.] Nov 1 00:35:30.205384 kubelet[2499]: E1101 00:35:30.205368 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 1 00:35:31.012075 kubelet[2499]: E1101 00:35:31.012027 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6" Nov 1 00:35:31.051674 kubelet[2499]: I1101 00:35:31.051643 2499 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:35:31.052075 kubelet[2499]: E1101 00:35:31.051923 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
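The "Nameserver limits exceeded" entries recurring through this log come from the kubelet capping a pod's resolv.conf at three nameservers, the classic glibc resolver limit; anything beyond the first three is dropped with this warning. A sketch of that truncation (not the kubelet's actual dns.go code; the fourth address below is hypothetical, since the log does not say which nameserver was omitted):

```go
// Keep at most three nameservers and warn about the rest, mirroring the
// behavior behind the dns.go:153 entries above.
package main

import "fmt"

const maxNameservers = 3 // resolver limit the kubelet enforces

func trim(ns []string) []string {
	if len(ns) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s), applied line: %v\n",
			len(ns)-maxNameservers, ns[:maxNameservers])
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	// Four configured upstreams reproduce the warning; the log shows the
	// applied line "1.1.1.1 1.0.0.1 8.8.8.8". The 9.9.9.9 is invented.
	fmt.Println(trim([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}))
}
```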
Error: unexpected end of JSON input" Nov 1 00:35:31.102643 kubelet[2499]: E1101 00:35:31.102629 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:35:31.102643 kubelet[2499]: W1101 00:35:31.102639 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:35:31.102702 kubelet[2499]: E1101 00:35:31.102648 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:35:31.102866 kubelet[2499]: E1101 00:35:31.102843 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:35:31.102866 kubelet[2499]: W1101 00:35:31.102854 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:35:31.102866 kubelet[2499]: E1101 00:35:31.102862 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:35:31.103063 kubelet[2499]: E1101 00:35:31.103044 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:35:31.103063 kubelet[2499]: W1101 00:35:31.103056 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:35:31.103063 kubelet[2499]: E1101 00:35:31.103063 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:35:31.103245 kubelet[2499]: E1101 00:35:31.103230 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:35:31.103245 kubelet[2499]: W1101 00:35:31.103241 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:35:31.103332 kubelet[2499]: E1101 00:35:31.103248 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:35:31.103487 kubelet[2499]: E1101 00:35:31.103471 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:35:31.103487 kubelet[2499]: W1101 00:35:31.103483 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:35:31.103568 kubelet[2499]: E1101 00:35:31.103491 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:35:31.103728 kubelet[2499]: E1101 00:35:31.103676 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:35:31.103728 kubelet[2499]: W1101 00:35:31.103684 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:35:31.103728 kubelet[2499]: E1101 00:35:31.103692 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:35:31.103869 kubelet[2499]: E1101 00:35:31.103855 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:35:31.103869 kubelet[2499]: W1101 00:35:31.103863 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:35:31.103944 kubelet[2499]: E1101 00:35:31.103870 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:35:31.104075 kubelet[2499]: E1101 00:35:31.104051 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:35:31.104075 kubelet[2499]: W1101 00:35:31.104065 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:35:31.104075 kubelet[2499]: E1101 00:35:31.104073 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:35:31.104311 kubelet[2499]: E1101 00:35:31.104290 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:35:31.104311 kubelet[2499]: W1101 00:35:31.104301 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:35:31.104311 kubelet[2499]: E1101 00:35:31.104310 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:35:31.104550 kubelet[2499]: E1101 00:35:31.104491 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:35:31.104550 kubelet[2499]: W1101 00:35:31.104503 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:35:31.104550 kubelet[2499]: E1101 00:35:31.104511 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 1 00:35:31.104738 kubelet[2499]: E1101 00:35:31.104722 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:35:31.104738 kubelet[2499]: W1101 00:35:31.104734 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:35:31.104794 kubelet[2499]: E1101 00:35:31.104753 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:35:31.112188 kubelet[2499]: E1101 00:35:31.112168 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:35:31.112188 kubelet[2499]: W1101 00:35:31.112181 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:35:31.112270 kubelet[2499]: E1101 00:35:31.112192 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
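Context for the probe failures above: the kubelet invokes every executable under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/ with the argument init and expects a JSON status object on stdout. Here the uds binary has not been installed yet (installing it is the job of the Calico flexvol-driver container whose image pull appears just below), so the call produces empty output and the unmarshal in driver-call.go fails. A minimal sketch of the init handshake in Go, illustrative only and not the real uds driver:

```go
// Minimal sketch of the FlexVolume "init" handshake the kubelet expects.
// A driver is any executable at
//   /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>
// that prints a JSON status object; empty stdout is exactly what produces the
// "unexpected end of JSON input" unmarshal errors in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the FlexVolume call-result shape.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	var st driverStatus
	switch os.Args[1] {
	case "init":
		// attach=false: a mount-only driver, so the kubelet skips the
		// attach/detach call path entirely.
		st = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		st = driverStatus{Status: "Not supported"}
	}
	out, _ := json.Marshal(st)
	fmt.Println(string(out))
}
```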
Nov 1 00:35:31.150735 containerd[1468]: time="2025-11-01T00:35:31.150697036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:35:31.151529 containerd[1468]: time="2025-11-01T00:35:31.151482298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 1 00:35:31.152716 containerd[1468]: time="2025-11-01T00:35:31.152669516Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:35:31.154712 containerd[1468]: time="2025-11-01T00:35:31.154660480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:35:31.155191 containerd[1468]: time="2025-11-01T00:35:31.155149156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.304424492s"
Nov 1 00:35:31.155191 containerd[1468]: time="2025-11-01T00:35:31.155187958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 1 00:35:31.162123 containerd[1468]: time="2025-11-01T00:35:31.162097448Z" level=info msg="CreateContainer within sandbox \"bd185ea133c9383a18d8c1a1378121ab196b6a3a8d9c00e08cd9764aa59ce5be\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 1 00:35:31.176768 containerd[1468]: time="2025-11-01T00:35:31.176735110Z" level=info msg="CreateContainer within sandbox \"bd185ea133c9383a18d8c1a1378121ab196b6a3a8d9c00e08cd9764aa59ce5be\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a75cb72670b074669b8d1668da90eb587d3e0bc202ab363afd264988325b387f\""
Nov 1 00:35:31.177048 containerd[1468]: time="2025-11-01T00:35:31.177023108Z" level=info msg="StartContainer for \"a75cb72670b074669b8d1668da90eb587d3e0bc202ab363afd264988325b387f\""
Nov 1 00:35:31.204213 systemd[1]: Started cri-containerd-a75cb72670b074669b8d1668da90eb587d3e0bc202ab363afd264988325b387f.scope - libcontainer container a75cb72670b074669b8d1668da90eb587d3e0bc202ab363afd264988325b387f.
Nov 1 00:35:31.289105 systemd[1]: cri-containerd-a75cb72670b074669b8d1668da90eb587d3e0bc202ab363afd264988325b387f.scope: Deactivated successfully.
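The ImageCreate/Pulled sequence above is containerd's CRI pull path: one event for the tag, one for the image config digest, one for the repo digest, then a summary with size and elapsed time. For reference, a sketch that performs the same pull through containerd's Go client; the socket path is assumed to be the stock default, and k8s.io is the containerd namespace CRI uses (the same one the shim events below are tagged with):

```go
// Sketch: repeat the image pull from the log via containerd's Go client.
// Assumes the default containerd socket; CRI-managed images live in the
// "k8s.io" containerd namespace.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4",
		containerd.WithPullUnpack) // unpack so a container can start from it
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s, digest %s", img.Name(), img.Target().Digest)
}
```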
Nov 1 00:35:31.296546 containerd[1468]: time="2025-11-01T00:35:31.296494623Z" level=info msg="StartContainer for \"a75cb72670b074669b8d1668da90eb587d3e0bc202ab363afd264988325b387f\" returns successfully"
Nov 1 00:35:31.336141 containerd[1468]: time="2025-11-01T00:35:31.336074736Z" level=info msg="shim disconnected" id=a75cb72670b074669b8d1668da90eb587d3e0bc202ab363afd264988325b387f namespace=k8s.io
Nov 1 00:35:31.336141 containerd[1468]: time="2025-11-01T00:35:31.336134647Z" level=warning msg="cleaning up after shim disconnected" id=a75cb72670b074669b8d1668da90eb587d3e0bc202ab363afd264988325b387f namespace=k8s.io
Nov 1 00:35:31.336141 containerd[1468]: time="2025-11-01T00:35:31.336147871Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 00:35:31.861361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a75cb72670b074669b8d1668da90eb587d3e0bc202ab363afd264988325b387f-rootfs.mount: Deactivated successfully.
Nov 1 00:35:32.059835 kubelet[2499]: E1101 00:35:32.059803 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:32.060552 containerd[1468]: time="2025-11-01T00:35:32.060508729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 1 00:35:33.012009 kubelet[2499]: E1101 00:35:33.011961 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6"
Nov 1 00:35:34.609894 containerd[1468]: time="2025-11-01T00:35:34.609849173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:35:34.610586 containerd[1468]: time="2025-11-01T00:35:34.610528795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 1 00:35:34.611564 containerd[1468]: time="2025-11-01T00:35:34.611535389Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:35:34.613536 containerd[1468]: time="2025-11-01T00:35:34.613507382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:35:34.614192 containerd[1468]: time="2025-11-01T00:35:34.614153973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.553609278s"
Nov 1 00:35:34.614192 containerd[1468]: time="2025-11-01T00:35:34.614189037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
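The recurring dns.go:153 line is a fixed policy rather than a transient fault: when composing a pod's DNS config from the host's /etc/resolv.conf, the kubelet keeps at most three nameservers (matching the glibc resolver's MAXNS), which is why the applied line is exactly 1.1.1.1 1.0.0.1 8.8.8.8. A sketch of the truncation rule; the fourth resolver shown is hypothetical, standing in for whatever the host actually listed:

```go
// Sketch of the rule behind "Nameserver limits exceeded": the kubelet keeps
// at most three nameservers from the host resolv.conf when building a pod's
// DNS config (three is also the glibc resolver's MAXNS).
package main

import "fmt"

const maxDNSNameservers = 3

func clipNameservers(hostNS []string) []string {
	if len(hostNS) <= maxDNSNameservers {
		return hostNS
	}
	return hostNS[:maxDNSNameservers]
}

func main() {
	// The first three match the "applied nameserver line" in the log;
	// 8.8.4.4 is a hypothetical fourth entry that would get dropped.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	fmt.Println(clipNameservers(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```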
Nov 1 00:35:34.619696 containerd[1468]: time="2025-11-01T00:35:34.619666218Z" level=info msg="CreateContainer within sandbox \"bd185ea133c9383a18d8c1a1378121ab196b6a3a8d9c00e08cd9764aa59ce5be\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 1 00:35:34.634181 containerd[1468]: time="2025-11-01T00:35:34.634142003Z" level=info msg="CreateContainer within sandbox \"bd185ea133c9383a18d8c1a1378121ab196b6a3a8d9c00e08cd9764aa59ce5be\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"99db2f9f42ac59666ea5a6cba4f974835602574b67f63790b90b7b6041781337\""
Nov 1 00:35:34.635913 containerd[1468]: time="2025-11-01T00:35:34.635320214Z" level=info msg="StartContainer for \"99db2f9f42ac59666ea5a6cba4f974835602574b67f63790b90b7b6041781337\""
Nov 1 00:35:34.675725 systemd[1]: Started cri-containerd-99db2f9f42ac59666ea5a6cba4f974835602574b67f63790b90b7b6041781337.scope - libcontainer container 99db2f9f42ac59666ea5a6cba4f974835602574b67f63790b90b7b6041781337.
Nov 1 00:35:34.704960 containerd[1468]: time="2025-11-01T00:35:34.704837936Z" level=info msg="StartContainer for \"99db2f9f42ac59666ea5a6cba4f974835602574b67f63790b90b7b6041781337\" returns successfully"
Nov 1 00:35:35.040615 kubelet[2499]: E1101 00:35:35.038583 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6"
Nov 1 00:35:35.067747 kubelet[2499]: E1101 00:35:35.067702 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:36.069376 kubelet[2499]: E1101 00:35:36.069288 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:36.310443 containerd[1468]: time="2025-11-01T00:35:36.310391765Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:35:36.313681 systemd[1]: cri-containerd-99db2f9f42ac59666ea5a6cba4f974835602574b67f63790b90b7b6041781337.scope: Deactivated successfully.
Nov 1 00:35:36.334403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99db2f9f42ac59666ea5a6cba4f974835602574b67f63790b90b7b6041781337-rootfs.mount: Deactivated successfully.
Nov 1 00:35:36.338621 containerd[1468]: time="2025-11-01T00:35:36.338549066Z" level=info msg="shim disconnected" id=99db2f9f42ac59666ea5a6cba4f974835602574b67f63790b90b7b6041781337 namespace=k8s.io
Nov 1 00:35:36.338621 containerd[1468]: time="2025-11-01T00:35:36.338611531Z" level=warning msg="cleaning up after shim disconnected" id=99db2f9f42ac59666ea5a6cba4f974835602574b67f63790b90b7b6041781337 namespace=k8s.io
Nov 1 00:35:36.338621 containerd[1468]: time="2025-11-01T00:35:36.338621680Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 00:35:36.345642 kubelet[2499]: I1101 00:35:36.345581 2499 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 1 00:35:36.376344 systemd[1]: Created slice kubepods-burstable-pod02d687a0_8306_485c_897b_e3fc603e4632.slice - libcontainer container kubepods-burstable-pod02d687a0_8306_485c_897b_e3fc603e4632.slice.
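The reload error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") and the long run of RunPodSandbox failures that follow share a single root cause: Calico's CNI plugin will not operate until /var/lib/calico/nodename exists, and that file is only written by the calico/node container, whose image pull only begins further down the log. A sketch of the equivalent readiness check:

```go
// Sketch of the readiness check behind every
// "stat /var/lib/calico/nodename: no such file or directory" error below:
// calico/node writes this file when it starts, and the Calico CNI plugin
// treats its absence as "the node agent is not running yet".
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the state the sandbox errors below keep reporting.
		fmt.Printf("%v — calico/node not ready on this host\n", err)
		os.Exit(1)
	}
	fmt.Printf("calico/node is up; node name %q\n", strings.TrimSpace(string(data)))
}
```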
Nov 1 00:35:36.386900 systemd[1]: Created slice kubepods-burstable-pod8731e9b0_7c90_4504_b50f_7b034a8b8a07.slice - libcontainer container kubepods-burstable-pod8731e9b0_7c90_4504_b50f_7b034a8b8a07.slice.
Nov 1 00:35:36.395202 systemd[1]: Created slice kubepods-besteffort-pod4ca70b04_3681_42b1_b3b8_746e67038cfe.slice - libcontainer container kubepods-besteffort-pod4ca70b04_3681_42b1_b3b8_746e67038cfe.slice.
Nov 1 00:35:36.401366 systemd[1]: Created slice kubepods-besteffort-podd57a8509_e37c_4d69_93aa_35fdadef5de6.slice - libcontainer container kubepods-besteffort-podd57a8509_e37c_4d69_93aa_35fdadef5de6.slice.
Nov 1 00:35:36.407442 systemd[1]: Created slice kubepods-besteffort-pod331a1960_88ad_4608_9f70_708ee400d030.slice - libcontainer container kubepods-besteffort-pod331a1960_88ad_4608_9f70_708ee400d030.slice.
Nov 1 00:35:36.413193 systemd[1]: Created slice kubepods-besteffort-pod42b9da1b_c5f5_468c_9b0b_bd955feccb34.slice - libcontainer container kubepods-besteffort-pod42b9da1b_c5f5_468c_9b0b_bd955feccb34.slice.
Nov 1 00:35:36.418393 systemd[1]: Created slice kubepods-besteffort-pod376113d4_ed9e_4de6_ab71_daa7d077b967.slice - libcontainer container kubepods-besteffort-pod376113d4_ed9e_4de6_ab71_daa7d077b967.slice.
Nov 1 00:35:36.540672 kubelet[2499]: I1101 00:35:36.540618 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02d687a0-8306-485c-897b-e3fc603e4632-config-volume\") pod \"coredns-668d6bf9bc-ct4jw\" (UID: \"02d687a0-8306-485c-897b-e3fc603e4632\") " pod="kube-system/coredns-668d6bf9bc-ct4jw"
Nov 1 00:35:36.540672 kubelet[2499]: I1101 00:35:36.540657 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8731e9b0-7c90-4504-b50f-7b034a8b8a07-config-volume\") pod \"coredns-668d6bf9bc-g4fmz\" (UID: \"8731e9b0-7c90-4504-b50f-7b034a8b8a07\") " pod="kube-system/coredns-668d6bf9bc-g4fmz"
Nov 1 00:35:36.540672 kubelet[2499]: I1101 00:35:36.540676 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jffx9\" (UniqueName: \"kubernetes.io/projected/d57a8509-e37c-4d69-93aa-35fdadef5de6-kube-api-access-jffx9\") pod \"calico-apiserver-68fc7bb9b7-tvhcs\" (UID: \"d57a8509-e37c-4d69-93aa-35fdadef5de6\") " pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs"
Nov 1 00:35:36.540856 kubelet[2499]: I1101 00:35:36.540695 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md9zk\" (UniqueName: \"kubernetes.io/projected/02d687a0-8306-485c-897b-e3fc603e4632-kube-api-access-md9zk\") pod \"coredns-668d6bf9bc-ct4jw\" (UID: \"02d687a0-8306-485c-897b-e3fc603e4632\") " pod="kube-system/coredns-668d6bf9bc-ct4jw"
Nov 1 00:35:36.540856 kubelet[2499]: I1101 00:35:36.540712 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/376113d4-ed9e-4de6-ab71-daa7d077b967-whisker-ca-bundle\") pod \"whisker-59bc9c756c-z94hk\" (UID: \"376113d4-ed9e-4de6-ab71-daa7d077b967\") " pod="calico-system/whisker-59bc9c756c-z94hk"
Nov 1 00:35:36.540856 kubelet[2499]: I1101 00:35:36.540727 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dmr8\" (UniqueName: \"kubernetes.io/projected/8731e9b0-7c90-4504-b50f-7b034a8b8a07-kube-api-access-7dmr8\") pod \"coredns-668d6bf9bc-g4fmz\" (UID: \"8731e9b0-7c90-4504-b50f-7b034a8b8a07\") " pod="kube-system/coredns-668d6bf9bc-g4fmz"
Nov 1 00:35:36.540856 kubelet[2499]: I1101 00:35:36.540742 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ca70b04-3681-42b1-b3b8-746e67038cfe-tigera-ca-bundle\") pod \"calico-kube-controllers-64f94746cd-5r8bx\" (UID: \"4ca70b04-3681-42b1-b3b8-746e67038cfe\") " pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx"
Nov 1 00:35:36.540856 kubelet[2499]: I1101 00:35:36.540757 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwph\" (UniqueName: \"kubernetes.io/projected/331a1960-88ad-4608-9f70-708ee400d030-kube-api-access-djwph\") pod \"goldmane-666569f655-rlz6p\" (UID: \"331a1960-88ad-4608-9f70-708ee400d030\") " pod="calico-system/goldmane-666569f655-rlz6p"
Nov 1 00:35:36.540972 kubelet[2499]: I1101 00:35:36.540772 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6zbx\" (UniqueName: \"kubernetes.io/projected/376113d4-ed9e-4de6-ab71-daa7d077b967-kube-api-access-g6zbx\") pod \"whisker-59bc9c756c-z94hk\" (UID: \"376113d4-ed9e-4de6-ab71-daa7d077b967\") " pod="calico-system/whisker-59bc9c756c-z94hk"
Nov 1 00:35:36.540972 kubelet[2499]: I1101 00:35:36.540839 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42b9da1b-c5f5-468c-9b0b-bd955feccb34-calico-apiserver-certs\") pod \"calico-apiserver-68fc7bb9b7-c7qgt\" (UID: \"42b9da1b-c5f5-468c-9b0b-bd955feccb34\") " pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt"
Nov 1 00:35:36.540972 kubelet[2499]: I1101 00:35:36.540887 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d57a8509-e37c-4d69-93aa-35fdadef5de6-calico-apiserver-certs\") pod \"calico-apiserver-68fc7bb9b7-tvhcs\" (UID: \"d57a8509-e37c-4d69-93aa-35fdadef5de6\") " pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs"
Nov 1 00:35:36.540972 kubelet[2499]: I1101 00:35:36.540904 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/376113d4-ed9e-4de6-ab71-daa7d077b967-whisker-backend-key-pair\") pod \"whisker-59bc9c756c-z94hk\" (UID: \"376113d4-ed9e-4de6-ab71-daa7d077b967\") " pod="calico-system/whisker-59bc9c756c-z94hk"
Nov 1 00:35:36.540972 kubelet[2499]: I1101 00:35:36.540947 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cxdk\" (UniqueName: \"kubernetes.io/projected/4ca70b04-3681-42b1-b3b8-746e67038cfe-kube-api-access-7cxdk\") pod \"calico-kube-controllers-64f94746cd-5r8bx\" (UID: \"4ca70b04-3681-42b1-b3b8-746e67038cfe\") " pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx"
Nov 1 00:35:36.541096 kubelet[2499]: I1101 00:35:36.540970 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/331a1960-88ad-4608-9f70-708ee400d030-goldmane-ca-bundle\") pod \"goldmane-666569f655-rlz6p\" (UID: \"331a1960-88ad-4608-9f70-708ee400d030\") " pod="calico-system/goldmane-666569f655-rlz6p"
Nov 1 00:35:36.541096 kubelet[2499]: I1101 00:35:36.541003 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vct\" (UniqueName: \"kubernetes.io/projected/42b9da1b-c5f5-468c-9b0b-bd955feccb34-kube-api-access-t4vct\") pod \"calico-apiserver-68fc7bb9b7-c7qgt\" (UID: \"42b9da1b-c5f5-468c-9b0b-bd955feccb34\") " pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt"
Nov 1 00:35:36.541096 kubelet[2499]: I1101 00:35:36.541027 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331a1960-88ad-4608-9f70-708ee400d030-config\") pod \"goldmane-666569f655-rlz6p\" (UID: \"331a1960-88ad-4608-9f70-708ee400d030\") " pod="calico-system/goldmane-666569f655-rlz6p"
Nov 1 00:35:36.541096 kubelet[2499]: I1101 00:35:36.541041 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/331a1960-88ad-4608-9f70-708ee400d030-goldmane-key-pair\") pod \"goldmane-666569f655-rlz6p\" (UID: \"331a1960-88ad-4608-9f70-708ee400d030\") " pod="calico-system/goldmane-666569f655-rlz6p"
Nov 1 00:35:36.683042 kubelet[2499]: E1101 00:35:36.682979 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:36.683702 containerd[1468]: time="2025-11-01T00:35:36.683662296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ct4jw,Uid:02d687a0-8306-485c-897b-e3fc603e4632,Namespace:kube-system,Attempt:0,}"
Nov 1 00:35:36.692464 kubelet[2499]: E1101 00:35:36.692441 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:35:36.693720 containerd[1468]: time="2025-11-01T00:35:36.693665373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4fmz,Uid:8731e9b0-7c90-4504-b50f-7b034a8b8a07,Namespace:kube-system,Attempt:0,}"
Nov 1 00:35:36.699031 containerd[1468]: time="2025-11-01T00:35:36.698988330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f94746cd-5r8bx,Uid:4ca70b04-3681-42b1-b3b8-746e67038cfe,Namespace:calico-system,Attempt:0,}"
Nov 1 00:35:36.704625 containerd[1468]: time="2025-11-01T00:35:36.704557321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68fc7bb9b7-tvhcs,Uid:d57a8509-e37c-4d69-93aa-35fdadef5de6,Namespace:calico-apiserver,Attempt:0,}"
Nov 1 00:35:36.710553 containerd[1468]: time="2025-11-01T00:35:36.710499601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rlz6p,Uid:331a1960-88ad-4608-9f70-708ee400d030,Namespace:calico-system,Attempt:0,}"
Nov 1 00:35:36.717639 containerd[1468]: time="2025-11-01T00:35:36.716536085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68fc7bb9b7-c7qgt,Uid:42b9da1b-c5f5-468c-9b0b-bd955feccb34,Namespace:calico-apiserver,Attempt:0,}"
Nov 1 00:35:36.724632 containerd[1468]: time="2025-11-01T00:35:36.724564717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59bc9c756c-z94hk,Uid:376113d4-ed9e-4de6-ab71-daa7d077b967,Namespace:calico-system,Attempt:0,}"
Nov 1 00:35:36.811796 containerd[1468]:
time="2025-11-01T00:35:36.811735665Z" level=error msg="Failed to destroy network for sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.814613 containerd[1468]: time="2025-11-01T00:35:36.812848078Z" level=error msg="encountered an error cleaning up failed sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.823248 containerd[1468]: time="2025-11-01T00:35:36.823203957Z" level=error msg="Failed to destroy network for sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.823573 containerd[1468]: time="2025-11-01T00:35:36.823541210Z" level=error msg="encountered an error cleaning up failed sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.823624 containerd[1468]: time="2025-11-01T00:35:36.823590079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f94746cd-5r8bx,Uid:4ca70b04-3681-42b1-b3b8-746e67038cfe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.828324 containerd[1468]: time="2025-11-01T00:35:36.828277994Z" level=error msg="Failed to destroy network for sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.828874 containerd[1468]: time="2025-11-01T00:35:36.828851573Z" level=error msg="encountered an error cleaning up failed sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.828968 containerd[1468]: time="2025-11-01T00:35:36.828947019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4fmz,Uid:8731e9b0-7c90-4504-b50f-7b034a8b8a07,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 00:35:36.846747 containerd[1468]: time="2025-11-01T00:35:36.846692409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ct4jw,Uid:02d687a0-8306-485c-897b-e3fc603e4632,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.850104 containerd[1468]: time="2025-11-01T00:35:36.850063833Z" level=error msg="Failed to destroy network for sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.850430 kubelet[2499]: E1101 00:35:36.850382 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.850493 kubelet[2499]: E1101 00:35:36.850456 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ct4jw" Nov 1 00:35:36.850493 kubelet[2499]: E1101 00:35:36.850477 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ct4jw" Nov 1 00:35:36.850542 kubelet[2499]: E1101 00:35:36.850513 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ct4jw_kube-system(02d687a0-8306-485c-897b-e3fc603e4632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ct4jw_kube-system(02d687a0-8306-485c-897b-e3fc603e4632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ct4jw" podUID="02d687a0-8306-485c-897b-e3fc603e4632" Nov 1 00:35:36.850872 containerd[1468]: time="2025-11-01T00:35:36.850713563Z" level=error msg="encountered an error cleaning up failed sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.850872 containerd[1468]: time="2025-11-01T00:35:36.850804260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68fc7bb9b7-tvhcs,Uid:d57a8509-e37c-4d69-93aa-35fdadef5de6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.851355 kubelet[2499]: E1101 00:35:36.851328 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.851418 kubelet[2499]: E1101 00:35:36.851360 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs" Nov 1 00:35:36.851418 kubelet[2499]: E1101 00:35:36.851379 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs" Nov 1 00:35:36.851418 kubelet[2499]: E1101 00:35:36.851406 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68fc7bb9b7-tvhcs_calico-apiserver(d57a8509-e37c-4d69-93aa-35fdadef5de6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68fc7bb9b7-tvhcs_calico-apiserver(d57a8509-e37c-4d69-93aa-35fdadef5de6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs" podUID="d57a8509-e37c-4d69-93aa-35fdadef5de6" Nov 1 00:35:36.851516 kubelet[2499]: E1101 00:35:36.851436 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.851516 kubelet[2499]: E1101 00:35:36.851451 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" Nov 1 00:35:36.851516 kubelet[2499]: E1101 00:35:36.851464 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" Nov 1 00:35:36.851585 kubelet[2499]: E1101 00:35:36.851485 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64f94746cd-5r8bx_calico-system(4ca70b04-3681-42b1-b3b8-746e67038cfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64f94746cd-5r8bx_calico-system(4ca70b04-3681-42b1-b3b8-746e67038cfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" podUID="4ca70b04-3681-42b1-b3b8-746e67038cfe" Nov 1 00:35:36.851585 kubelet[2499]: E1101 00:35:36.851508 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.851585 kubelet[2499]: E1101 00:35:36.851522 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g4fmz" Nov 1 00:35:36.851753 kubelet[2499]: E1101 00:35:36.851535 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g4fmz" Nov 1 00:35:36.851753 kubelet[2499]: E1101 00:35:36.851565 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g4fmz_kube-system(8731e9b0-7c90-4504-b50f-7b034a8b8a07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g4fmz_kube-system(8731e9b0-7c90-4504-b50f-7b034a8b8a07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g4fmz" podUID="8731e9b0-7c90-4504-b50f-7b034a8b8a07" Nov 1 00:35:36.869714 containerd[1468]: time="2025-11-01T00:35:36.869668516Z" level=error msg="Failed to destroy network for sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.870661 containerd[1468]: time="2025-11-01T00:35:36.870620925Z" level=error msg="encountered an error cleaning up failed sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.870715 containerd[1468]: time="2025-11-01T00:35:36.870683149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68fc7bb9b7-c7qgt,Uid:42b9da1b-c5f5-468c-9b0b-bd955feccb34,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.871126 kubelet[2499]: E1101 00:35:36.870849 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.871126 kubelet[2499]: E1101 00:35:36.870895 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" Nov 1 00:35:36.871126 kubelet[2499]: E1101 00:35:36.870915 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" Nov 1 00:35:36.871220 kubelet[2499]: E1101 00:35:36.870956 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68fc7bb9b7-c7qgt_calico-apiserver(42b9da1b-c5f5-468c-9b0b-bd955feccb34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-68fc7bb9b7-c7qgt_calico-apiserver(42b9da1b-c5f5-468c-9b0b-bd955feccb34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" podUID="42b9da1b-c5f5-468c-9b0b-bd955feccb34" Nov 1 00:35:36.873570 containerd[1468]: time="2025-11-01T00:35:36.873510739Z" level=error msg="Failed to destroy network for sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.874130 containerd[1468]: time="2025-11-01T00:35:36.874096531Z" level=error msg="encountered an error cleaning up failed sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.874274 containerd[1468]: time="2025-11-01T00:35:36.874153285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rlz6p,Uid:331a1960-88ad-4608-9f70-708ee400d030,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.874370 kubelet[2499]: E1101 00:35:36.874327 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.874418 kubelet[2499]: E1101 00:35:36.874389 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rlz6p" Nov 1 00:35:36.874449 kubelet[2499]: E1101 00:35:36.874414 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rlz6p" Nov 1 00:35:36.874476 kubelet[2499]: E1101 00:35:36.874457 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-666569f655-rlz6p_calico-system(331a1960-88ad-4608-9f70-708ee400d030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-rlz6p_calico-system(331a1960-88ad-4608-9f70-708ee400d030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rlz6p" podUID="331a1960-88ad-4608-9f70-708ee400d030" Nov 1 00:35:36.878333 containerd[1468]: time="2025-11-01T00:35:36.878277459Z" level=error msg="Failed to destroy network for sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.878623 containerd[1468]: time="2025-11-01T00:35:36.878581680Z" level=error msg="encountered an error cleaning up failed sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.878662 containerd[1468]: time="2025-11-01T00:35:36.878632184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59bc9c756c-z94hk,Uid:376113d4-ed9e-4de6-ab71-daa7d077b967,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.878789 kubelet[2499]: E1101 00:35:36.878765 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:36.878844 kubelet[2499]: E1101 00:35:36.878795 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59bc9c756c-z94hk" Nov 1 00:35:36.878844 kubelet[2499]: E1101 00:35:36.878811 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59bc9c756c-z94hk" Nov 1 00:35:36.878892 kubelet[2499]: E1101 00:35:36.878842 2499 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59bc9c756c-z94hk_calico-system(376113d4-ed9e-4de6-ab71-daa7d077b967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59bc9c756c-z94hk_calico-system(376113d4-ed9e-4de6-ab71-daa7d077b967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59bc9c756c-z94hk" podUID="376113d4-ed9e-4de6-ab71-daa7d077b967" Nov 1 00:35:37.017098 systemd[1]: Created slice kubepods-besteffort-pod31c28b53_e76c_45d5_b66c_cb1d82d504b6.slice - libcontainer container kubepods-besteffort-pod31c28b53_e76c_45d5_b66c_cb1d82d504b6.slice. Nov 1 00:35:37.019847 containerd[1468]: time="2025-11-01T00:35:37.019816100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzfns,Uid:31c28b53-e76c-45d5-b66c-cb1d82d504b6,Namespace:calico-system,Attempt:0,}" Nov 1 00:35:37.072839 kubelet[2499]: I1101 00:35:37.072379 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:35:37.074275 kubelet[2499]: I1101 00:35:37.073902 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:35:37.075638 kubelet[2499]: I1101 00:35:37.075583 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:35:37.078138 containerd[1468]: time="2025-11-01T00:35:37.077923989Z" level=info msg="StopPodSandbox for \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\"" Nov 1 00:35:37.078673 containerd[1468]: time="2025-11-01T00:35:37.078654478Z" level=info msg="StopPodSandbox for \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\"" Nov 1 00:35:37.078725 kubelet[2499]: E1101 00:35:37.078714 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:37.080219 containerd[1468]: time="2025-11-01T00:35:37.079034000Z" level=info msg="StopPodSandbox for \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\"" Nov 1 00:35:37.082287 containerd[1468]: time="2025-11-01T00:35:37.081829115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:35:37.085707 containerd[1468]: time="2025-11-01T00:35:37.082821379Z" level=error msg="Failed to destroy network for sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.085707 containerd[1468]: time="2025-11-01T00:35:37.083779479Z" level=info msg="StopPodSandbox for \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\"" Nov 1 00:35:37.085858 kubelet[2499]: I1101 00:35:37.083282 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:35:37.087829 containerd[1468]: 
time="2025-11-01T00:35:37.087786213Z" level=info msg="Ensure that sandbox fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20 in task-service has been cleanup successfully" Nov 1 00:35:37.088103 containerd[1468]: time="2025-11-01T00:35:37.088071962Z" level=info msg="Ensure that sandbox 50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d in task-service has been cleanup successfully" Nov 1 00:35:37.088513 containerd[1468]: time="2025-11-01T00:35:37.088477471Z" level=info msg="Ensure that sandbox 5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c in task-service has been cleanup successfully" Nov 1 00:35:37.088835 containerd[1468]: time="2025-11-01T00:35:37.088789728Z" level=error msg="encountered an error cleaning up failed sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.088866 containerd[1468]: time="2025-11-01T00:35:37.088846102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzfns,Uid:31c28b53-e76c-45d5-b66c-cb1d82d504b6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.089450 kubelet[2499]: E1101 00:35:37.089410 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.089511 kubelet[2499]: E1101 00:35:37.089463 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzfns" Nov 1 00:35:37.089511 kubelet[2499]: E1101 00:35:37.089486 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzfns" Nov 1 00:35:37.089568 kubelet[2499]: E1101 00:35:37.089525 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jzfns_calico-system(31c28b53-e76c-45d5-b66c-cb1d82d504b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jzfns_calico-system(31c28b53-e76c-45d5-b66c-cb1d82d504b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6" Nov 1 00:35:37.092205 kubelet[2499]: I1101 00:35:37.092184 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:35:37.093304 containerd[1468]: time="2025-11-01T00:35:37.093147751Z" level=info msg="StopPodSandbox for \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\"" Nov 1 00:35:37.097423 containerd[1468]: time="2025-11-01T00:35:37.097395300Z" level=info msg="Ensure that sandbox 2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588 in task-service has been cleanup successfully" Nov 1 00:35:37.098111 kubelet[2499]: I1101 00:35:37.098062 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:35:37.099650 containerd[1468]: time="2025-11-01T00:35:37.099588042Z" level=info msg="StopPodSandbox for \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\"" Nov 1 00:35:37.100210 containerd[1468]: time="2025-11-01T00:35:37.100173003Z" level=info msg="Ensure that sandbox e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a in task-service has been cleanup successfully" Nov 1 00:35:37.101079 containerd[1468]: time="2025-11-01T00:35:37.100949237Z" level=info msg="Ensure that sandbox 8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885 in task-service has been cleanup successfully" Nov 1 00:35:37.103247 kubelet[2499]: I1101 00:35:37.103223 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:35:37.104320 containerd[1468]: time="2025-11-01T00:35:37.103667181Z" level=info msg="StopPodSandbox for \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\"" Nov 1 00:35:37.104320 containerd[1468]: time="2025-11-01T00:35:37.104135926Z" level=info msg="Ensure that sandbox e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b in task-service has been cleanup successfully" Nov 1 00:35:37.144931 containerd[1468]: time="2025-11-01T00:35:37.144869279Z" level=error msg="StopPodSandbox for \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\" failed" error="failed to destroy network for sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.145231 kubelet[2499]: E1101 00:35:37.145165 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:35:37.145352 kubelet[2499]: E1101 00:35:37.145241 2499 kuberuntime_manager.go:1546] "Failed 
to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20"} Nov 1 00:35:37.145352 kubelet[2499]: E1101 00:35:37.145303 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"376113d4-ed9e-4de6-ab71-daa7d077b967\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:35:37.145352 kubelet[2499]: E1101 00:35:37.145325 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"376113d4-ed9e-4de6-ab71-daa7d077b967\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59bc9c756c-z94hk" podUID="376113d4-ed9e-4de6-ab71-daa7d077b967" Nov 1 00:35:37.146241 containerd[1468]: time="2025-11-01T00:35:37.146204246Z" level=error msg="StopPodSandbox for \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\" failed" error="failed to destroy network for sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.147712 kubelet[2499]: E1101 00:35:37.147622 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:35:37.147712 kubelet[2499]: E1101 00:35:37.147649 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b"} Nov 1 00:35:37.147712 kubelet[2499]: E1101 00:35:37.147670 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8731e9b0-7c90-4504-b50f-7b034a8b8a07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:35:37.147712 kubelet[2499]: E1101 00:35:37.147689 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8731e9b0-7c90-4504-b50f-7b034a8b8a07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g4fmz" podUID="8731e9b0-7c90-4504-b50f-7b034a8b8a07" Nov 1 00:35:37.147967 containerd[1468]: time="2025-11-01T00:35:37.147641151Z" level=error msg="StopPodSandbox for \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\" failed" error="failed to destroy network for sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.148541 kubelet[2499]: E1101 00:35:37.148503 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:35:37.148638 kubelet[2499]: E1101 00:35:37.148551 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d"} Nov 1 00:35:37.150936 kubelet[2499]: E1101 00:35:37.148587 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42b9da1b-c5f5-468c-9b0b-bd955feccb34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:35:37.150936 kubelet[2499]: E1101 00:35:37.150338 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42b9da1b-c5f5-468c-9b0b-bd955feccb34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" podUID="42b9da1b-c5f5-468c-9b0b-bd955feccb34" Nov 1 00:35:37.152499 containerd[1468]: time="2025-11-01T00:35:37.152451009Z" level=error msg="StopPodSandbox for \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\" failed" error="failed to destroy network for sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.152655 kubelet[2499]: E1101 00:35:37.152581 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:35:37.152655 kubelet[2499]: E1101 00:35:37.152619 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a"} Nov 1 00:35:37.152655 kubelet[2499]: E1101 00:35:37.152640 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ca70b04-3681-42b1-b3b8-746e67038cfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:35:37.152793 kubelet[2499]: E1101 00:35:37.152657 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ca70b04-3681-42b1-b3b8-746e67038cfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" podUID="4ca70b04-3681-42b1-b3b8-746e67038cfe" Nov 1 00:35:37.156995 containerd[1468]: time="2025-11-01T00:35:37.156919827Z" level=error msg="StopPodSandbox for \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\" failed" error="failed to destroy network for sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.158709 containerd[1468]: time="2025-11-01T00:35:37.157296553Z" level=error msg="StopPodSandbox for \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\" failed" error="failed to destroy network for sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.158795 kubelet[2499]: E1101 00:35:37.158761 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:35:37.158830 kubelet[2499]: E1101 00:35:37.158813 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588"} Nov 1 00:35:37.158853 kubelet[2499]: E1101 00:35:37.158835 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"331a1960-88ad-4608-9f70-708ee400d030\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:35:37.158901 kubelet[2499]: E1101 00:35:37.158865 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"331a1960-88ad-4608-9f70-708ee400d030\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rlz6p" podUID="331a1960-88ad-4608-9f70-708ee400d030" Nov 1 00:35:37.158988 kubelet[2499]: E1101 00:35:37.158954 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:35:37.159019 kubelet[2499]: E1101 00:35:37.158987 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885"} Nov 1 00:35:37.159019 kubelet[2499]: E1101 00:35:37.159007 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d57a8509-e37c-4d69-93aa-35fdadef5de6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:35:37.159089 kubelet[2499]: E1101 00:35:37.159029 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d57a8509-e37c-4d69-93aa-35fdadef5de6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs" podUID="d57a8509-e37c-4d69-93aa-35fdadef5de6" Nov 1 00:35:37.159339 containerd[1468]: time="2025-11-01T00:35:37.159295476Z" level=error msg="StopPodSandbox for \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\" failed" error="failed to destroy network for sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:37.159491 kubelet[2499]: E1101 00:35:37.159459 2499 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:35:37.159529 kubelet[2499]: E1101 00:35:37.159494 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c"} Nov 1 00:35:37.159529 kubelet[2499]: E1101 00:35:37.159518 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02d687a0-8306-485c-897b-e3fc603e4632\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:35:37.159614 kubelet[2499]: E1101 00:35:37.159536 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02d687a0-8306-485c-897b-e3fc603e4632\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ct4jw" podUID="02d687a0-8306-485c-897b-e3fc603e4632" Nov 1 00:35:37.337156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c-shm.mount: Deactivated successfully. 
Nov 1 00:35:38.105931 kubelet[2499]: I1101 00:35:38.105888 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:35:38.106490 containerd[1468]: time="2025-11-01T00:35:38.106452009Z" level=info msg="StopPodSandbox for \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\"" Nov 1 00:35:38.106864 containerd[1468]: time="2025-11-01T00:35:38.106630539Z" level=info msg="Ensure that sandbox ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db in task-service has been cleanup successfully" Nov 1 00:35:38.131120 containerd[1468]: time="2025-11-01T00:35:38.131072637Z" level=error msg="StopPodSandbox for \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\" failed" error="failed to destroy network for sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:35:38.131330 kubelet[2499]: E1101 00:35:38.131284 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:35:38.131405 kubelet[2499]: E1101 00:35:38.131342 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db"} Nov 1 00:35:38.131405 kubelet[2499]: E1101 00:35:38.131378 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31c28b53-e76c-45d5-b66c-cb1d82d504b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:35:38.131484 kubelet[2499]: E1101 00:35:38.131402 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31c28b53-e76c-45d5-b66c-cb1d82d504b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6" Nov 1 00:35:43.912355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355163596.mount: Deactivated successfully. 
Nov 1 00:35:45.587153 containerd[1468]: time="2025-11-01T00:35:45.587097726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:35:45.588169 containerd[1468]: time="2025-11-01T00:35:45.587845131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:35:45.589368 containerd[1468]: time="2025-11-01T00:35:45.589307549Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:35:45.592192 containerd[1468]: time="2025-11-01T00:35:45.592157036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:35:45.593778 containerd[1468]: time="2025-11-01T00:35:45.593698913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.511827589s" Nov 1 00:35:45.593778 containerd[1468]: time="2025-11-01T00:35:45.593738677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:35:45.603429 containerd[1468]: time="2025-11-01T00:35:45.603392070Z" level=info msg="CreateContainer within sandbox \"bd185ea133c9383a18d8c1a1378121ab196b6a3a8d9c00e08cd9764aa59ce5be\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:35:45.621574 containerd[1468]: time="2025-11-01T00:35:45.621527818Z" level=info msg="CreateContainer within sandbox \"bd185ea133c9383a18d8c1a1378121ab196b6a3a8d9c00e08cd9764aa59ce5be\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bb8e6ce5fcd51f9257aaae338a0348c14235123c65bb6bd05878b355a081b20a\"" Nov 1 00:35:45.622073 containerd[1468]: time="2025-11-01T00:35:45.622037148Z" level=info msg="StartContainer for \"bb8e6ce5fcd51f9257aaae338a0348c14235123c65bb6bd05878b355a081b20a\"" Nov 1 00:35:45.696732 systemd[1]: Started cri-containerd-bb8e6ce5fcd51f9257aaae338a0348c14235123c65bb6bd05878b355a081b20a.scope - libcontainer container bb8e6ce5fcd51f9257aaae338a0348c14235123c65bb6bd05878b355a081b20a. Nov 1 00:35:45.743194 containerd[1468]: time="2025-11-01T00:35:45.743032189Z" level=info msg="StartContainer for \"bb8e6ce5fcd51f9257aaae338a0348c14235123c65bb6bd05878b355a081b20a\" returns successfully" Nov 1 00:35:45.819635 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:35:45.819746 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Nov 1 00:35:45.910309 containerd[1468]: time="2025-11-01T00:35:45.910263671Z" level=info msg="StopPodSandbox for \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\"" Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:45.997 [INFO][3833] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:45.998 [INFO][3833] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" iface="eth0" netns="/var/run/netns/cni-a7d4320b-a5c3-8533-69b3-322f242afc10" Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:45.999 [INFO][3833] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" iface="eth0" netns="/var/run/netns/cni-a7d4320b-a5c3-8533-69b3-322f242afc10" Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:46.000 [INFO][3833] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" iface="eth0" netns="/var/run/netns/cni-a7d4320b-a5c3-8533-69b3-322f242afc10" Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:46.000 [INFO][3833] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:46.000 [INFO][3833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:46.059 [INFO][3842] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" HandleID="k8s-pod-network.fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Workload="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:46.060 [INFO][3842] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:46.060 [INFO][3842] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:46.066 [WARNING][3842] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" HandleID="k8s-pod-network.fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Workload="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:46.066 [INFO][3842] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" HandleID="k8s-pod-network.fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Workload="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:46.068 [INFO][3842] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:35:46.075193 containerd[1468]: 2025-11-01 00:35:46.072 [INFO][3833] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:35:46.075609 containerd[1468]: time="2025-11-01T00:35:46.075349648Z" level=info msg="TearDown network for sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\" successfully" Nov 1 00:35:46.075609 containerd[1468]: time="2025-11-01T00:35:46.075387121Z" level=info msg="StopPodSandbox for \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\" returns successfully" Nov 1 00:35:46.123900 kubelet[2499]: E1101 00:35:46.123869 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:46.201053 kubelet[2499]: I1101 00:35:46.200451 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6zbx\" (UniqueName: \"kubernetes.io/projected/376113d4-ed9e-4de6-ab71-daa7d077b967-kube-api-access-g6zbx\") pod \"376113d4-ed9e-4de6-ab71-daa7d077b967\" (UID: \"376113d4-ed9e-4de6-ab71-daa7d077b967\") " Nov 1 00:35:46.201053 kubelet[2499]: I1101 00:35:46.200494 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/376113d4-ed9e-4de6-ab71-daa7d077b967-whisker-backend-key-pair\") pod \"376113d4-ed9e-4de6-ab71-daa7d077b967\" (UID: \"376113d4-ed9e-4de6-ab71-daa7d077b967\") " Nov 1 00:35:46.201053 kubelet[2499]: I1101 00:35:46.200522 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/376113d4-ed9e-4de6-ab71-daa7d077b967-whisker-ca-bundle\") pod \"376113d4-ed9e-4de6-ab71-daa7d077b967\" (UID: \"376113d4-ed9e-4de6-ab71-daa7d077b967\") " Nov 1 00:35:46.201053 kubelet[2499]: I1101 00:35:46.200964 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/376113d4-ed9e-4de6-ab71-daa7d077b967-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "376113d4-ed9e-4de6-ab71-daa7d077b967" (UID: "376113d4-ed9e-4de6-ab71-daa7d077b967"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:35:46.205195 kubelet[2499]: I1101 00:35:46.205164 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/376113d4-ed9e-4de6-ab71-daa7d077b967-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "376113d4-ed9e-4de6-ab71-daa7d077b967" (UID: "376113d4-ed9e-4de6-ab71-daa7d077b967"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:35:46.205338 kubelet[2499]: I1101 00:35:46.205183 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376113d4-ed9e-4de6-ab71-daa7d077b967-kube-api-access-g6zbx" (OuterVolumeSpecName: "kube-api-access-g6zbx") pod "376113d4-ed9e-4de6-ab71-daa7d077b967" (UID: "376113d4-ed9e-4de6-ab71-daa7d077b967"). InnerVolumeSpecName "kube-api-access-g6zbx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:35:46.301690 kubelet[2499]: I1101 00:35:46.301639 2499 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/376113d4-ed9e-4de6-ab71-daa7d077b967-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 00:35:46.301690 kubelet[2499]: I1101 00:35:46.301662 2499 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/376113d4-ed9e-4de6-ab71-daa7d077b967-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 00:35:46.302430 kubelet[2499]: I1101 00:35:46.302405 2499 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g6zbx\" (UniqueName: \"kubernetes.io/projected/376113d4-ed9e-4de6-ab71-daa7d077b967-kube-api-access-g6zbx\") on node \"localhost\" DevicePath \"\"" Nov 1 00:35:46.429390 systemd[1]: Removed slice kubepods-besteffort-pod376113d4_ed9e_4de6_ab71_daa7d077b967.slice - libcontainer container kubepods-besteffort-pod376113d4_ed9e_4de6_ab71_daa7d077b967.slice. Nov 1 00:35:46.604082 systemd[1]: run-netns-cni\x2da7d4320b\x2da5c3\x2d8533\x2d69b3\x2d322f242afc10.mount: Deactivated successfully. Nov 1 00:35:46.604199 systemd[1]: var-lib-kubelet-pods-376113d4\x2ded9e\x2d4de6\x2dab71\x2ddaa7d077b967-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg6zbx.mount: Deactivated successfully. Nov 1 00:35:46.604283 systemd[1]: var-lib-kubelet-pods-376113d4\x2ded9e\x2d4de6\x2dab71\x2ddaa7d077b967-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:35:46.888958 kubelet[2499]: I1101 00:35:46.888824 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jx6wz" podStartSLOduration=2.192788085 podStartE2EDuration="19.88797225s" podCreationTimestamp="2025-11-01 00:35:27 +0000 UTC" firstStartedPulling="2025-11-01 00:35:27.901236596 +0000 UTC m=+17.975846799" lastFinishedPulling="2025-11-01 00:35:45.596420751 +0000 UTC m=+35.671030964" observedRunningTime="2025-11-01 00:35:46.139299056 +0000 UTC m=+36.213909269" watchObservedRunningTime="2025-11-01 00:35:46.88797225 +0000 UTC m=+36.962582463" Nov 1 00:35:46.946394 systemd[1]: Created slice kubepods-besteffort-podd166a932_62b2_424c_af81_b672793d3ad2.slice - libcontainer container kubepods-besteffort-podd166a932_62b2_424c_af81_b672793d3ad2.slice. 
Nov 1 00:35:47.106302 kubelet[2499]: I1101 00:35:47.106239 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d166a932-62b2-424c-af81-b672793d3ad2-whisker-backend-key-pair\") pod \"whisker-7f6ff4bc47-cjjhn\" (UID: \"d166a932-62b2-424c-af81-b672793d3ad2\") " pod="calico-system/whisker-7f6ff4bc47-cjjhn" Nov 1 00:35:47.106302 kubelet[2499]: I1101 00:35:47.106287 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d166a932-62b2-424c-af81-b672793d3ad2-whisker-ca-bundle\") pod \"whisker-7f6ff4bc47-cjjhn\" (UID: \"d166a932-62b2-424c-af81-b672793d3ad2\") " pod="calico-system/whisker-7f6ff4bc47-cjjhn" Nov 1 00:35:47.106302 kubelet[2499]: I1101 00:35:47.106305 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmc6s\" (UniqueName: \"kubernetes.io/projected/d166a932-62b2-424c-af81-b672793d3ad2-kube-api-access-mmc6s\") pod \"whisker-7f6ff4bc47-cjjhn\" (UID: \"d166a932-62b2-424c-af81-b672793d3ad2\") " pod="calico-system/whisker-7f6ff4bc47-cjjhn" Nov 1 00:35:47.250246 containerd[1468]: time="2025-11-01T00:35:47.250196160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f6ff4bc47-cjjhn,Uid:d166a932-62b2-424c-af81-b672793d3ad2,Namespace:calico-system,Attempt:0,}" Nov 1 00:35:47.444484 systemd[1]: Started sshd@7-10.0.0.5:22-10.0.0.1:33650.service - OpenSSH per-connection server daemon (10.0.0.1:33650). Nov 1 00:35:47.625930 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 33650 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:35:47.734121 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:35:47.738934 systemd-logind[1455]: New session 8 of user core. Nov 1 00:35:47.752718 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:35:47.889012 sshd[3970]: pam_unix(sshd:session): session closed for user core Nov 1 00:35:47.893837 systemd[1]: sshd@7-10.0.0.5:22-10.0.0.1:33650.service: Deactivated successfully. Nov 1 00:35:47.895996 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:35:47.897168 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:35:47.898090 systemd-logind[1455]: Removed session 8. 
Nov 1 00:35:47.961507 systemd-networkd[1389]: calia8ae5718eb9: Link UP Nov 1 00:35:47.962214 systemd-networkd[1389]: calia8ae5718eb9: Gained carrier Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.888 [INFO][3983] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.899 [INFO][3983] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0 whisker-7f6ff4bc47- calico-system d166a932-62b2-424c-af81-b672793d3ad2 945 0 2025-11-01 00:35:46 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f6ff4bc47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7f6ff4bc47-cjjhn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia8ae5718eb9 [] [] }} ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Namespace="calico-system" Pod="whisker-7f6ff4bc47-cjjhn" WorkloadEndpoint="localhost-k8s-whisker--7f6ff4bc47--cjjhn-" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.899 [INFO][3983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Namespace="calico-system" Pod="whisker-7f6ff4bc47-cjjhn" WorkloadEndpoint="localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.925 [INFO][4001] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" HandleID="k8s-pod-network.0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Workload="localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.925 [INFO][4001] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" HandleID="k8s-pod-network.0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Workload="localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001393f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7f6ff4bc47-cjjhn", "timestamp":"2025-11-01 00:35:47.925249555 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.925 [INFO][4001] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.925 [INFO][4001] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.925 [INFO][4001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.932 [INFO][4001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" host="localhost" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.936 [INFO][4001] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.939 [INFO][4001] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.941 [INFO][4001] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.942 [INFO][4001] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.942 [INFO][4001] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" host="localhost" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.943 [INFO][4001] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.947 [INFO][4001] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" host="localhost" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.951 [INFO][4001] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" host="localhost" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.951 [INFO][4001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" host="localhost" Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.951 [INFO][4001] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:35:47.976520 containerd[1468]: 2025-11-01 00:35:47.951 [INFO][4001] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" HandleID="k8s-pod-network.0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Workload="localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0" Nov 1 00:35:47.977097 containerd[1468]: 2025-11-01 00:35:47.954 [INFO][3983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Namespace="calico-system" Pod="whisker-7f6ff4bc47-cjjhn" WorkloadEndpoint="localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0", GenerateName:"whisker-7f6ff4bc47-", Namespace:"calico-system", SelfLink:"", UID:"d166a932-62b2-424c-af81-b672793d3ad2", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f6ff4bc47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7f6ff4bc47-cjjhn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia8ae5718eb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:47.977097 containerd[1468]: 2025-11-01 00:35:47.954 [INFO][3983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Namespace="calico-system" Pod="whisker-7f6ff4bc47-cjjhn" WorkloadEndpoint="localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0" Nov 1 00:35:47.977097 containerd[1468]: 2025-11-01 00:35:47.954 [INFO][3983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8ae5718eb9 ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Namespace="calico-system" Pod="whisker-7f6ff4bc47-cjjhn" WorkloadEndpoint="localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0" Nov 1 00:35:47.977097 containerd[1468]: 2025-11-01 00:35:47.961 [INFO][3983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Namespace="calico-system" Pod="whisker-7f6ff4bc47-cjjhn" WorkloadEndpoint="localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0" Nov 1 00:35:47.977097 containerd[1468]: 2025-11-01 00:35:47.962 [INFO][3983] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Namespace="calico-system" Pod="whisker-7f6ff4bc47-cjjhn" WorkloadEndpoint="localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0", GenerateName:"whisker-7f6ff4bc47-", Namespace:"calico-system", SelfLink:"", UID:"d166a932-62b2-424c-af81-b672793d3ad2", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f6ff4bc47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e", Pod:"whisker-7f6ff4bc47-cjjhn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia8ae5718eb9", MAC:"ea:59:cb:eb:ab:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:47.977097 containerd[1468]: 2025-11-01 00:35:47.971 [INFO][3983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e" Namespace="calico-system" Pod="whisker-7f6ff4bc47-cjjhn" WorkloadEndpoint="localhost-k8s-whisker--7f6ff4bc47--cjjhn-eth0" Nov 1 00:35:48.004139 containerd[1468]: time="2025-11-01T00:35:48.004004245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:48.004139 containerd[1468]: time="2025-11-01T00:35:48.004110261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:48.005198 containerd[1468]: time="2025-11-01T00:35:48.004272885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:48.005198 containerd[1468]: time="2025-11-01T00:35:48.004896111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:48.014134 kubelet[2499]: I1101 00:35:48.014102 2499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="376113d4-ed9e-4de6-ab71-daa7d077b967" path="/var/lib/kubelet/pods/376113d4-ed9e-4de6-ab71-daa7d077b967/volumes" Nov 1 00:35:48.033755 systemd[1]: Started cri-containerd-0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e.scope - libcontainer container 0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e.
Nov 1 00:35:48.046029 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:35:48.068660 containerd[1468]: time="2025-11-01T00:35:48.068565396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f6ff4bc47-cjjhn,Uid:d166a932-62b2-424c-af81-b672793d3ad2,Namespace:calico-system,Attempt:0,} returns sandbox id \"0207fd46b77592671c4511cc620fa5a5807d1d0130cf8d8099edf07693701e7e\"" Nov 1 00:35:48.073710 containerd[1468]: time="2025-11-01T00:35:48.073684269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:35:48.373012 kubelet[2499]: I1101 00:35:48.372966 2499 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:35:48.373474 kubelet[2499]: E1101 00:35:48.373455 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:48.427808 containerd[1468]: time="2025-11-01T00:35:48.427750113Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:35:48.440623 containerd[1468]: time="2025-11-01T00:35:48.429331472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:35:48.440623 containerd[1468]: time="2025-11-01T00:35:48.429415514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:35:48.441116 kubelet[2499]: E1101 00:35:48.439176 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:35:48.441116 kubelet[2499]: E1101 00:35:48.439229 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:35:48.441198 kubelet[2499]: E1101 00:35:48.440359 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:79277034d2d74e8eb716aae70187f367,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmc6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f6ff4bc47-cjjhn_calico-system(d166a932-62b2-424c-af81-b672793d3ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:35:48.444606 containerd[1468]: time="2025-11-01T00:35:48.444560416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:35:48.749298 containerd[1468]: time="2025-11-01T00:35:48.749216072Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:35:48.750551 containerd[1468]: time="2025-11-01T00:35:48.750510848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:35:48.750634 containerd[1468]: time="2025-11-01T00:35:48.750578108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:35:48.750769 kubelet[2499]: E1101 00:35:48.750740 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:35:48.750862 kubelet[2499]: E1101 00:35:48.750776 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:35:48.750928 kubelet[2499]: E1101 00:35:48.750886 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmc6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f6ff4bc47-cjjhn_calico-system(d166a932-62b2-424c-af81-b672793d3ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:35:48.752555 kubelet[2499]: E1101 00:35:48.752500 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f6ff4bc47-cjjhn" podUID="d166a932-62b2-424c-af81-b672793d3ad2" Nov 1 00:35:49.012741 containerd[1468]: time="2025-11-01T00:35:49.012400202Z" level=info msg="StopPodSandbox for \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\"" Nov 1 00:35:49.012741 containerd[1468]: time="2025-11-01T00:35:49.012531475Z" level=info msg="StopPodSandbox for 
\"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\"" Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.057 [INFO][4146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.057 [INFO][4146] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" iface="eth0" netns="/var/run/netns/cni-81e138f6-5ed3-dbe8-447e-3a8354ad5489" Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.057 [INFO][4146] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" iface="eth0" netns="/var/run/netns/cni-81e138f6-5ed3-dbe8-447e-3a8354ad5489" Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.057 [INFO][4146] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" iface="eth0" netns="/var/run/netns/cni-81e138f6-5ed3-dbe8-447e-3a8354ad5489" Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.057 [INFO][4146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.058 [INFO][4146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.079 [INFO][4161] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" HandleID="k8s-pod-network.e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.079 [INFO][4161] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.079 [INFO][4161] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.084 [WARNING][4161] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" HandleID="k8s-pod-network.e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.084 [INFO][4161] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" HandleID="k8s-pod-network.e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.086 [INFO][4161] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:35:49.092205 containerd[1468]: 2025-11-01 00:35:49.088 [INFO][4146] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:35:49.093037 containerd[1468]: time="2025-11-01T00:35:49.092381414Z" level=info msg="TearDown network for sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\" successfully" Nov 1 00:35:49.093037 containerd[1468]: time="2025-11-01T00:35:49.092406854Z" level=info msg="StopPodSandbox for \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\" returns successfully" Nov 1 00:35:49.093037 containerd[1468]: time="2025-11-01T00:35:49.093011623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f94746cd-5r8bx,Uid:4ca70b04-3681-42b1-b3b8-746e67038cfe,Namespace:calico-system,Attempt:1,}" Nov 1 00:35:49.094860 systemd[1]: run-netns-cni\x2d81e138f6\x2d5ed3\x2ddbe8\x2d447e\x2d3a8354ad5489.mount: Deactivated successfully. Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.053 [INFO][4145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.055 [INFO][4145] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" iface="eth0" netns="/var/run/netns/cni-8ca5a198-cd92-ac93-afae-b68543fefbd5" Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.055 [INFO][4145] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" iface="eth0" netns="/var/run/netns/cni-8ca5a198-cd92-ac93-afae-b68543fefbd5" Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.057 [INFO][4145] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" iface="eth0" netns="/var/run/netns/cni-8ca5a198-cd92-ac93-afae-b68543fefbd5" Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.057 [INFO][4145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.057 [INFO][4145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.081 [INFO][4162] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" HandleID="k8s-pod-network.ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.082 [INFO][4162] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.086 [INFO][4162] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.090 [WARNING][4162] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" HandleID="k8s-pod-network.ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.090 [INFO][4162] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" HandleID="k8s-pod-network.ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.091 [INFO][4162] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:35:49.099227 containerd[1468]: 2025-11-01 00:35:49.096 [INFO][4145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:35:49.099574 containerd[1468]: time="2025-11-01T00:35:49.099383170Z" level=info msg="TearDown network for sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\" successfully" Nov 1 00:35:49.099574 containerd[1468]: time="2025-11-01T00:35:49.099409480Z" level=info msg="StopPodSandbox for \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\" returns successfully" Nov 1 00:35:49.100084 containerd[1468]: time="2025-11-01T00:35:49.100057894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzfns,Uid:31c28b53-e76c-45d5-b66c-cb1d82d504b6,Namespace:calico-system,Attempt:1,}" Nov 1 00:35:49.101450 systemd[1]: run-netns-cni\x2d8ca5a198\x2dcd92\x2dac93\x2dafae\x2db68543fefbd5.mount: Deactivated successfully. Nov 1 00:35:49.138188 kubelet[2499]: E1101 00:35:49.138055 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f6ff4bc47-cjjhn" podUID="d166a932-62b2-424c-af81-b672793d3ad2" Nov 1 00:35:49.203251 systemd-networkd[1389]: cali23728ae47bd: Link UP Nov 1 00:35:49.204883 systemd-networkd[1389]: cali23728ae47bd: Gained carrier Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.129 [INFO][4179] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.140 [INFO][4179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0 calico-kube-controllers-64f94746cd- calico-system 4ca70b04-3681-42b1-b3b8-746e67038cfe 999 0 2025-11-01 00:35:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64f94746cd 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64f94746cd-5r8bx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali23728ae47bd [] [] }} ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Namespace="calico-system" Pod="calico-kube-controllers-64f94746cd-5r8bx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.140 [INFO][4179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Namespace="calico-system" Pod="calico-kube-controllers-64f94746cd-5r8bx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.171 [INFO][4206] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" HandleID="k8s-pod-network.86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.171 [INFO][4206] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" HandleID="k8s-pod-network.86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135a80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64f94746cd-5r8bx", "timestamp":"2025-11-01 00:35:49.171522593 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.171 [INFO][4206] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.172 [INFO][4206] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
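[editor's note] The ipam_plugin entries above and below bracket every allocation with "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock": concurrent CNI ADD/DEL invocations on this node are serialized so block reads and writes cannot race. A minimal sketch of that pattern — not Calico's actual code (its lock is host-scoped, not process-local); a mutex stands in here purely for illustration:

    package main

    import (
        "fmt"
        "sync"
    )

    // hostWideIPAMLock stands in for the host-scoped lock seen in the log;
    // every assignment or release takes it before touching allocation blocks.
    var hostWideIPAMLock sync.Mutex

    func withIPAMLock(op string, fn func() error) error {
        fmt.Println("About to acquire host-wide IPAM lock. op=" + op)
        hostWideIPAMLock.Lock()
        fmt.Println("Acquired host-wide IPAM lock.")
        defer func() {
            hostWideIPAMLock.Unlock()
            fmt.Println("Released host-wide IPAM lock.")
        }()
        return fn()
    }

    func main() {
        _ = withIPAMLock("assign", func() error { return nil })
    }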
Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.172 [INFO][4206] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.178 [INFO][4206] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" host="localhost" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.182 [INFO][4206] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.185 [INFO][4206] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.188 [INFO][4206] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.189 [INFO][4206] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.189 [INFO][4206] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" host="localhost" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.190 [INFO][4206] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0 Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.193 [INFO][4206] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" host="localhost" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.198 [INFO][4206] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" host="localhost" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.198 [INFO][4206] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" host="localhost" Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.198 [INFO][4206] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
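[editor's note] The walk just logged — look up affinities, try affinity for 192.168.88.128/26, load the block, assign from it — is host-affine allocation: the node prefers addresses from a /26 block it already owns, so 192.168.88.130 is claimed from the affine block rather than from the pool at large. The containment check at the heart of it, as a self-contained stdlib sketch:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // block with affinity to "localhost"
        candidate := netip.MustParseAddr("192.168.88.130")  // the address claimed in the log above
        fmt.Println(block.Contains(candidate))              // true: assign from the affine block
    }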
Nov 1 00:35:49.217477 containerd[1468]: 2025-11-01 00:35:49.198 [INFO][4206] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" HandleID="k8s-pod-network.86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:35:49.218074 containerd[1468]: 2025-11-01 00:35:49.201 [INFO][4179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Namespace="calico-system" Pod="calico-kube-controllers-64f94746cd-5r8bx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0", GenerateName:"calico-kube-controllers-64f94746cd-", Namespace:"calico-system", SelfLink:"", UID:"4ca70b04-3681-42b1-b3b8-746e67038cfe", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f94746cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64f94746cd-5r8bx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23728ae47bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:49.218074 containerd[1468]: 2025-11-01 00:35:49.201 [INFO][4179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Namespace="calico-system" Pod="calico-kube-controllers-64f94746cd-5r8bx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:35:49.218074 containerd[1468]: 2025-11-01 00:35:49.201 [INFO][4179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23728ae47bd ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Namespace="calico-system" Pod="calico-kube-controllers-64f94746cd-5r8bx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:35:49.218074 containerd[1468]: 2025-11-01 00:35:49.203 [INFO][4179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Namespace="calico-system" Pod="calico-kube-controllers-64f94746cd-5r8bx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:35:49.218074 containerd[1468]: 2025-11-01 00:35:49.203 [INFO][4179] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Namespace="calico-system" Pod="calico-kube-controllers-64f94746cd-5r8bx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0", GenerateName:"calico-kube-controllers-64f94746cd-", Namespace:"calico-system", SelfLink:"", UID:"4ca70b04-3681-42b1-b3b8-746e67038cfe", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f94746cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0", Pod:"calico-kube-controllers-64f94746cd-5r8bx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23728ae47bd", MAC:"5a:ed:e9:a0:d2:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:49.218074 containerd[1468]: 2025-11-01 00:35:49.214 [INFO][4179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0" Namespace="calico-system" Pod="calico-kube-controllers-64f94746cd-5r8bx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:35:49.236982 containerd[1468]: time="2025-11-01T00:35:49.236296278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:49.237132 containerd[1468]: time="2025-11-01T00:35:49.236964980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:49.237132 containerd[1468]: time="2025-11-01T00:35:49.236977294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:49.237132 containerd[1468]: time="2025-11-01T00:35:49.237051488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:49.259744 systemd[1]: Started cri-containerd-86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0.scope - libcontainer container 86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0. 
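[editor's note] The four "loading plugin" lines are the runc shim v2 bringing up its services (event publisher, shutdown, ttrpc task and pause), and the container then runs inside a transient systemd scope named after its ID: cri-containerd-<id>.scope. The earlier run-netns-cni\x2d… mount units also show systemd's unit-name escaping, where "/" becomes "-" and a literal "-" in a path component becomes \x2d. A simplified sketch of both naming rules (real systemd escaping handles more byte classes; this covers only what the log exhibits):

    package main

    import (
        "fmt"
        "strings"
    )

    // scopeUnit mirrors the naming visible in the log: the container ID
    // is embedded in a transient systemd scope unit.
    func scopeUnit(id string) string { return "cri-containerd-" + id + ".scope" }

    // escapeMountUnit shows why the netns mount appears as
    // "run-netns-cni\x2d81e138f6-…": "/" -> "-", "-" -> \x2d. (Simplified.)
    func escapeMountUnit(path string) string {
        var b strings.Builder
        for _, c := range strings.TrimPrefix(path, "/") {
            switch c {
            case '/':
                b.WriteByte('-')
            case '-':
                b.WriteString(`\x2d`)
            default:
                b.WriteRune(c)
            }
        }
        return b.String() + ".mount"
    }

    func main() {
        fmt.Println(scopeUnit("86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0"))
        fmt.Println(escapeMountUnit("/run/netns/cni-81e138f6-5ed3-dbe8-447e-3a8354ad5489"))
    }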
Nov 1 00:35:49.271312 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:35:49.302905 containerd[1468]: time="2025-11-01T00:35:49.302860114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f94746cd-5r8bx,Uid:4ca70b04-3681-42b1-b3b8-746e67038cfe,Namespace:calico-system,Attempt:1,} returns sandbox id \"86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0\"" Nov 1 00:35:49.305722 containerd[1468]: time="2025-11-01T00:35:49.304947038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:35:49.308949 systemd-networkd[1389]: cali10bab976708: Link UP Nov 1 00:35:49.309161 systemd-networkd[1389]: cali10bab976708: Gained carrier Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.141 [INFO][4194] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.159 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jzfns-eth0 csi-node-driver- calico-system 31c28b53-e76c-45d5-b66c-cb1d82d504b6 998 0 2025-11-01 00:35:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jzfns eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali10bab976708 [] [] }} ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Namespace="calico-system" Pod="csi-node-driver-jzfns" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzfns-" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.159 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Namespace="calico-system" Pod="csi-node-driver-jzfns" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.186 [INFO][4215] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" HandleID="k8s-pod-network.156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.186 [INFO][4215] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" HandleID="k8s-pod-network.156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c75c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jzfns", "timestamp":"2025-11-01 00:35:49.186028425 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.186 [INFO][4215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
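[editor's note] The "Auto assigning IP" entries print the full assignArgs structure, which is enough to reconstruct the request shape: one IPv4 and zero IPv6 addresses per pod, keyed by a handle derived from the sandbox ID, with namespace/node/pod attributes attached. A hedged mirror of that shape — field set taken from the log dump itself, not from the library source:

    package main

    import "fmt"

    // autoAssignArgs mirrors the fields printed in the log's assignArgs=… dump.
    type autoAssignArgs struct {
        Num4, Num6  int               // 1 IPv4, 0 IPv6 requested per pod
        HandleID    string            // "k8s-pod-network." + containerID
        Attrs       map[string]string // namespace, node, pod, timestamp
        Hostname    string
        IntendedUse string // "Workload"
    }

    func main() {
        req := autoAssignArgs{
            Num4:     1,
            HandleID: "k8s-pod-network.156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59",
            Attrs: map[string]string{
                "namespace": "calico-system",
                "node":      "localhost",
                "pod":       "csi-node-driver-jzfns",
            },
            Hostname:    "localhost",
            IntendedUse: "Workload",
        }
        fmt.Printf("%+v\n", req)
    }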
Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.198 [INFO][4215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.198 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.280 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" host="localhost" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.285 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.288 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.290 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.293 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.293 [INFO][4215] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" host="localhost" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.294 [INFO][4215] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59 Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.297 [INFO][4215] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" host="localhost" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.301 [INFO][4215] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" host="localhost" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.301 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" host="localhost" Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.301 [INFO][4215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
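[editor's note] The earlier teardowns logged "Asked to release address but it doesn't exist. Ignoring": release is keyed by the same HandleID used at assignment and is deliberately idempotent, so a repeated CNI DEL for an already-released sandbox is a warning-level no-op rather than an error. A toy sketch of that bookkeeping, assuming nothing beyond the handle-to-IPs mapping visible in the log:

    package main

    import "fmt"

    // allocations maps handleID -> assigned IPs; release by handle is idempotent.
    var allocations = map[string][]string{}

    func assign(handleID, ip string) {
        allocations[handleID] = append(allocations[handleID], ip)
    }

    func release(handleID string) {
        if _, ok := allocations[handleID]; !ok {
            // mirrors the WARNING in the log: second DEL finds nothing to free
            fmt.Println("Asked to release address but it doesn't exist. Ignoring")
            return
        }
        delete(allocations, handleID)
    }

    func main() {
        assign("k8s-pod-network.e101f67d", "192.168.88.129")
        release("k8s-pod-network.e101f67d")
        release("k8s-pod-network.e101f67d") // ignored, not an error
    }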
Nov 1 00:35:49.320790 containerd[1468]: 2025-11-01 00:35:49.301 [INFO][4215] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" HandleID="k8s-pod-network.156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:35:49.321309 containerd[1468]: 2025-11-01 00:35:49.306 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Namespace="calico-system" Pod="csi-node-driver-jzfns" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzfns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jzfns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31c28b53-e76c-45d5-b66c-cb1d82d504b6", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jzfns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali10bab976708", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:49.321309 containerd[1468]: 2025-11-01 00:35:49.306 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Namespace="calico-system" Pod="csi-node-driver-jzfns" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:35:49.321309 containerd[1468]: 2025-11-01 00:35:49.306 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10bab976708 ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Namespace="calico-system" Pod="csi-node-driver-jzfns" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:35:49.321309 containerd[1468]: 2025-11-01 00:35:49.309 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Namespace="calico-system" Pod="csi-node-driver-jzfns" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:35:49.321309 containerd[1468]: 2025-11-01 00:35:49.309 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Namespace="calico-system" Pod="csi-node-driver-jzfns" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jzfns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jzfns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31c28b53-e76c-45d5-b66c-cb1d82d504b6", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59", Pod:"csi-node-driver-jzfns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali10bab976708", MAC:"8a:00:80:2c:e5:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:49.321309 containerd[1468]: 2025-11-01 00:35:49.317 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59" Namespace="calico-system" Pod="csi-node-driver-jzfns" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:35:49.329787 systemd-networkd[1389]: calia8ae5718eb9: Gained IPv6LL Nov 1 00:35:49.339042 containerd[1468]: time="2025-11-01T00:35:49.338391189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:49.339042 containerd[1468]: time="2025-11-01T00:35:49.339019854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:49.339042 containerd[1468]: time="2025-11-01T00:35:49.339032880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:49.339139 containerd[1468]: time="2025-11-01T00:35:49.339112584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:49.356783 systemd[1]: Started cri-containerd-156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59.scope - libcontainer container 156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59. 
Nov 1 00:35:49.367253 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:35:49.377861 containerd[1468]: time="2025-11-01T00:35:49.377807355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzfns,Uid:31c28b53-e76c-45d5-b66c-cb1d82d504b6,Namespace:calico-system,Attempt:1,} returns sandbox id \"156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59\"" Nov 1 00:35:49.615895 containerd[1468]: time="2025-11-01T00:35:49.615778491Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:35:49.622176 containerd[1468]: time="2025-11-01T00:35:49.622141832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:35:49.622273 containerd[1468]: time="2025-11-01T00:35:49.622220825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:35:49.622339 kubelet[2499]: E1101 00:35:49.622303 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:35:49.622418 kubelet[2499]: E1101 00:35:49.622350 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:35:49.622643 kubelet[2499]: E1101 00:35:49.622564 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cxdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64f94746cd-5r8bx_calico-system(4ca70b04-3681-42b1-b3b8-746e67038cfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:35:49.622750 containerd[1468]: time="2025-11-01T00:35:49.622694761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:35:49.624166 kubelet[2499]: E1101 00:35:49.624135 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" 
podUID="4ca70b04-3681-42b1-b3b8-746e67038cfe" Nov 1 00:35:49.949865 containerd[1468]: time="2025-11-01T00:35:49.949819079Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:35:49.950894 containerd[1468]: time="2025-11-01T00:35:49.950869088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:35:49.950964 containerd[1468]: time="2025-11-01T00:35:49.950935397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:35:49.951117 kubelet[2499]: E1101 00:35:49.951061 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:35:49.951173 kubelet[2499]: E1101 00:35:49.951117 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:35:49.951315 kubelet[2499]: E1101 00:35:49.951239 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-278m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-jzfns_calico-system(31c28b53-e76c-45d5-b66c-cb1d82d504b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:35:49.953160 containerd[1468]: time="2025-11-01T00:35:49.953137393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:35:50.012517 containerd[1468]: time="2025-11-01T00:35:50.012485211Z" level=info msg="StopPodSandbox for \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\"" Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.053 [INFO][4356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.053 [INFO][4356] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" iface="eth0" netns="/var/run/netns/cni-71859515-aa7f-c95d-93bf-df667a78b355" Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.054 [INFO][4356] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" iface="eth0" netns="/var/run/netns/cni-71859515-aa7f-c95d-93bf-df667a78b355" Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.054 [INFO][4356] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" iface="eth0" netns="/var/run/netns/cni-71859515-aa7f-c95d-93bf-df667a78b355" Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.054 [INFO][4356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.054 [INFO][4356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.074 [INFO][4365] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" HandleID="k8s-pod-network.50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.074 [INFO][4365] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.074 [INFO][4365] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.079 [WARNING][4365] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" HandleID="k8s-pod-network.50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.079 [INFO][4365] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" HandleID="k8s-pod-network.50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.080 [INFO][4365] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:35:50.087089 containerd[1468]: 2025-11-01 00:35:50.083 [INFO][4356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:35:50.087471 containerd[1468]: time="2025-11-01T00:35:50.087274458Z" level=info msg="TearDown network for sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\" successfully" Nov 1 00:35:50.087471 containerd[1468]: time="2025-11-01T00:35:50.087301279Z" level=info msg="StopPodSandbox for \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\" returns successfully" Nov 1 00:35:50.087945 containerd[1468]: time="2025-11-01T00:35:50.087919333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68fc7bb9b7-c7qgt,Uid:42b9da1b-c5f5-468c-9b0b-bd955feccb34,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:35:50.089865 systemd[1]: run-netns-cni\x2d71859515\x2daa7f\x2dc95d\x2d93bf\x2ddf667a78b355.mount: Deactivated successfully. Nov 1 00:35:50.141377 kubelet[2499]: E1101 00:35:50.141217 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" podUID="4ca70b04-3681-42b1-b3b8-746e67038cfe" Nov 1 00:35:50.186832 systemd-networkd[1389]: cali97ecd072bb2: Link UP Nov 1 00:35:50.187745 systemd-networkd[1389]: cali97ecd072bb2: Gained carrier Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.119 [INFO][4374] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.128 [INFO][4374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0 calico-apiserver-68fc7bb9b7- calico-apiserver 42b9da1b-c5f5-468c-9b0b-bd955feccb34 1028 0 2025-11-01 00:35:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68fc7bb9b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68fc7bb9b7-c7qgt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali97ecd072bb2 [] [] }} 
ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-c7qgt" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.128 [INFO][4374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-c7qgt" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.157 [INFO][4387] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" HandleID="k8s-pod-network.27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.157 [INFO][4387] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" HandleID="k8s-pod-network.27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001353e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68fc7bb9b7-c7qgt", "timestamp":"2025-11-01 00:35:50.157350581 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.157 [INFO][4387] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.157 [INFO][4387] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.157 [INFO][4387] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.162 [INFO][4387] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" host="localhost" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.166 [INFO][4387] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.169 [INFO][4387] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.170 [INFO][4387] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.174 [INFO][4387] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.174 [INFO][4387] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" host="localhost" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.175 [INFO][4387] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5 Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.178 [INFO][4387] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" host="localhost" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.181 [INFO][4387] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" host="localhost" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.181 [INFO][4387] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" host="localhost" Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.182 [INFO][4387] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
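[editor's note] Across the three sandboxes in this window the IPAM walk hands out 192.168.88.130, .131 and .132 in order: within the affine /26 block, allocation behaves like "next free ordinal". A toy sketch over the same block — the assumption that ordinals 0 and 1 (.128, .129) were claimed before this log window is mine, inferred from the first assignment landing on .130:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree returns the first unclaimed ordinal in a 64-address (/26) block.
    func nextFree(base netip.Addr, used map[int]bool) (int, netip.Addr) {
        a := base
        for i := 0; i < 64; i++ {
            if !used[i] {
                return i, a
            }
            a = a.Next()
        }
        return -1, netip.Addr{}
    }

    func main() {
        base := netip.MustParseAddr("192.168.88.128")
        used := map[int]bool{0: true, 1: true} // assumption: .128/.129 claimed earlier
        for j := 0; j < 3; j++ {
            i, ip := nextFree(base, used)
            used[i] = true
            fmt.Println(ip) // .130, .131, .132 — the order seen in the log
        }
    }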
Nov 1 00:35:50.201824 containerd[1468]: 2025-11-01 00:35:50.182 [INFO][4387] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" HandleID="k8s-pod-network.27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:35:50.202484 containerd[1468]: 2025-11-01 00:35:50.185 [INFO][4374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-c7qgt" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0", GenerateName:"calico-apiserver-68fc7bb9b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"42b9da1b-c5f5-468c-9b0b-bd955feccb34", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68fc7bb9b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68fc7bb9b7-c7qgt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97ecd072bb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:50.202484 containerd[1468]: 2025-11-01 00:35:50.185 [INFO][4374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-c7qgt" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:35:50.202484 containerd[1468]: 2025-11-01 00:35:50.185 [INFO][4374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97ecd072bb2 ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-c7qgt" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:35:50.202484 containerd[1468]: 2025-11-01 00:35:50.187 [INFO][4374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-c7qgt" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:35:50.202484 containerd[1468]: 2025-11-01 00:35:50.188 [INFO][4374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-c7qgt" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0", GenerateName:"calico-apiserver-68fc7bb9b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"42b9da1b-c5f5-468c-9b0b-bd955feccb34", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68fc7bb9b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5", Pod:"calico-apiserver-68fc7bb9b7-c7qgt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97ecd072bb2", MAC:"12:a6:04:89:db:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:50.202484 containerd[1468]: 2025-11-01 00:35:50.198 [INFO][4374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-c7qgt" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:35:50.219935 containerd[1468]: time="2025-11-01T00:35:50.219797919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:50.220109 containerd[1468]: time="2025-11-01T00:35:50.219917160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:50.220109 containerd[1468]: time="2025-11-01T00:35:50.219944151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:50.220109 containerd[1468]: time="2025-11-01T00:35:50.220040348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:50.245759 systemd[1]: Started cri-containerd-27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5.scope - libcontainer container 27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5. 
Nov 1 00:35:50.257325 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:35:50.275539 containerd[1468]: time="2025-11-01T00:35:50.275507102Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:35:50.277300 containerd[1468]: time="2025-11-01T00:35:50.277261000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:35:50.277365 containerd[1468]: time="2025-11-01T00:35:50.277336736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:35:50.277505 kubelet[2499]: E1101 00:35:50.277443 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:35:50.277549 kubelet[2499]: E1101 00:35:50.277514 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:35:50.277668 kubelet[2499]: E1101 00:35:50.277639 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-278m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jzfns_calico-system(31c28b53-e76c-45d5-b66c-cb1d82d504b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:35:50.278748 kubelet[2499]: E1101 00:35:50.278721 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6" Nov 1 00:35:50.281653 containerd[1468]: time="2025-11-01T00:35:50.281613517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68fc7bb9b7-c7qgt,Uid:42b9da1b-c5f5-468c-9b0b-bd955feccb34,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5\"" Nov 1 00:35:50.283804 containerd[1468]: time="2025-11-01T00:35:50.283775703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:35:50.481747 systemd-networkd[1389]: cali10bab976708: Gained 
IPv6LL Nov 1 00:35:50.643257 containerd[1468]: time="2025-11-01T00:35:50.643219466Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:35:50.644502 containerd[1468]: time="2025-11-01T00:35:50.644397600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:35:50.644502 containerd[1468]: time="2025-11-01T00:35:50.644447717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:35:50.644673 kubelet[2499]: E1101 00:35:50.644630 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:35:50.644720 kubelet[2499]: E1101 00:35:50.644672 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:35:50.644823 kubelet[2499]: E1101 00:35:50.644780 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4vct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68fc7bb9b7-c7qgt_calico-apiserver(42b9da1b-c5f5-468c-9b0b-bd955feccb34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:35:50.646366 kubelet[2499]: E1101 00:35:50.646105 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" podUID="42b9da1b-c5f5-468c-9b0b-bd955feccb34" Nov 1 00:35:50.674751 systemd-networkd[1389]: cali23728ae47bd: Gained IPv6LL Nov 1 00:35:51.012274 containerd[1468]: time="2025-11-01T00:35:51.012225181Z" level=info msg="StopPodSandbox for \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\"" Nov 1 00:35:51.012414 containerd[1468]: time="2025-11-01T00:35:51.012282712Z" level=info msg="StopPodSandbox for \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\"" Nov 1 00:35:51.012414 containerd[1468]: time="2025-11-01T00:35:51.012237175Z" level=info msg="StopPodSandbox for \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\"" Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.057 [INFO][4488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.060 [INFO][4488] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" iface="eth0" netns="/var/run/netns/cni-7ef781b7-62f9-9c0e-d7d9-ca5a7b7b944b" Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.060 [INFO][4488] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" iface="eth0" netns="/var/run/netns/cni-7ef781b7-62f9-9c0e-d7d9-ca5a7b7b944b" Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.061 [INFO][4488] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" iface="eth0" netns="/var/run/netns/cni-7ef781b7-62f9-9c0e-d7d9-ca5a7b7b944b" Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.061 [INFO][4488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.061 [INFO][4488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.092 [INFO][4524] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" HandleID="k8s-pod-network.2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.093 [INFO][4524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.093 [INFO][4524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.102 [WARNING][4524] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" HandleID="k8s-pod-network.2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.102 [INFO][4524] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" HandleID="k8s-pod-network.2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.104 [INFO][4524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:35:51.111249 containerd[1468]: 2025-11-01 00:35:51.107 [INFO][4488] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:35:51.116476 systemd[1]: run-netns-cni\x2d7ef781b7\x2d62f9\x2d9c0e\x2dd7d9\x2dca5a7b7b944b.mount: Deactivated successfully. Nov 1 00:35:51.117304 containerd[1468]: time="2025-11-01T00:35:51.117162086Z" level=info msg="TearDown network for sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\" successfully" Nov 1 00:35:51.117304 containerd[1468]: time="2025-11-01T00:35:51.117203847Z" level=info msg="StopPodSandbox for \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\" returns successfully" Nov 1 00:35:51.117904 containerd[1468]: time="2025-11-01T00:35:51.117878078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rlz6p,Uid:331a1960-88ad-4608-9f70-708ee400d030,Namespace:calico-system,Attempt:1,}" Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.067 [INFO][4493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.069 [INFO][4493] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" iface="eth0" netns="/var/run/netns/cni-ea8548bf-2266-5053-c5da-872ab825efab" Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.069 [INFO][4493] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" iface="eth0" netns="/var/run/netns/cni-ea8548bf-2266-5053-c5da-872ab825efab" Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.069 [INFO][4493] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" iface="eth0" netns="/var/run/netns/cni-ea8548bf-2266-5053-c5da-872ab825efab" Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.070 [INFO][4493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.070 [INFO][4493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.106 [INFO][4530] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" HandleID="k8s-pod-network.5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.107 [INFO][4530] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.107 [INFO][4530] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.111 [WARNING][4530] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" HandleID="k8s-pod-network.5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.111 [INFO][4530] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" HandleID="k8s-pod-network.5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.113 [INFO][4530] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:35:51.119746 containerd[1468]: 2025-11-01 00:35:51.116 [INFO][4493] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:35:51.120061 containerd[1468]: time="2025-11-01T00:35:51.119951650Z" level=info msg="TearDown network for sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\" successfully" Nov 1 00:35:51.120061 containerd[1468]: time="2025-11-01T00:35:51.119982500Z" level=info msg="StopPodSandbox for \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\" returns successfully" Nov 1 00:35:51.121965 kubelet[2499]: E1101 00:35:51.121775 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:51.122760 containerd[1468]: time="2025-11-01T00:35:51.122714372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ct4jw,Uid:02d687a0-8306-485c-897b-e3fc603e4632,Namespace:kube-system,Attempt:1,}" Nov 1 00:35:51.123489 systemd[1]: run-netns-cni\x2dea8548bf\x2d2266\x2d5053\x2dc5da\x2d872ab825efab.mount: Deactivated successfully. Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.080 [INFO][4510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.080 [INFO][4510] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" iface="eth0" netns="/var/run/netns/cni-335cdb24-0419-e39e-1834-937291fef939" Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.080 [INFO][4510] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" iface="eth0" netns="/var/run/netns/cni-335cdb24-0419-e39e-1834-937291fef939" Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.081 [INFO][4510] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" iface="eth0" netns="/var/run/netns/cni-335cdb24-0419-e39e-1834-937291fef939" Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.081 [INFO][4510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.081 [INFO][4510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.109 [INFO][4538] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" HandleID="k8s-pod-network.8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.109 [INFO][4538] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.113 [INFO][4538] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.121 [WARNING][4538] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" HandleID="k8s-pod-network.8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.121 [INFO][4538] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" HandleID="k8s-pod-network.8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.124 [INFO][4538] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:35:51.130897 containerd[1468]: 2025-11-01 00:35:51.127 [INFO][4510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:35:51.131239 containerd[1468]: time="2025-11-01T00:35:51.131084476Z" level=info msg="TearDown network for sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\" successfully" Nov 1 00:35:51.131239 containerd[1468]: time="2025-11-01T00:35:51.131110265Z" level=info msg="StopPodSandbox for \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\" returns successfully" Nov 1 00:35:51.131696 containerd[1468]: time="2025-11-01T00:35:51.131665878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68fc7bb9b7-tvhcs,Uid:d57a8509-e37c-4d69-93aa-35fdadef5de6,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:35:51.134504 systemd[1]: run-netns-cni\x2d335cdb24\x2d0419\x2de39e\x2d1834\x2d937291fef939.mount: Deactivated successfully. Nov 1 00:35:51.146001 kubelet[2499]: E1101 00:35:51.145833 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" podUID="4ca70b04-3681-42b1-b3b8-746e67038cfe" Nov 1 00:35:51.146001 kubelet[2499]: E1101 00:35:51.145879 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" podUID="42b9da1b-c5f5-468c-9b0b-bd955feccb34" Nov 1 00:35:51.146421 kubelet[2499]: E1101 00:35:51.146301 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6" Nov 1 00:35:51.288944 systemd-networkd[1389]: calibb72cf79e7a: Link UP Nov 1 00:35:51.289572 systemd-networkd[1389]: calibb72cf79e7a: Gained carrier Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.181 [INFO][4549] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.195 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--rlz6p-eth0 goldmane-666569f655- calico-system 331a1960-88ad-4608-9f70-708ee400d030 1053 0 2025-11-01 00:35:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-rlz6p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibb72cf79e7a [] [] }} ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Namespace="calico-system" Pod="goldmane-666569f655-rlz6p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rlz6p-" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.195 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Namespace="calico-system" Pod="goldmane-666569f655-rlz6p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.238 [INFO][4598] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" HandleID="k8s-pod-network.dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.238 [INFO][4598] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" HandleID="k8s-pod-network.dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-rlz6p", "timestamp":"2025-11-01 00:35:51.238206077 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.238 [INFO][4598] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
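
Every PullImage failure above is the same NotFound from ghcr.io: the v3.30.4 tags do not resolve, so containerd's resolver gets a 404 for the manifest. The check can be reproduced outside containerd with the standard OCI distribution flow; a sketch in Go, assuming ghcr.io's usual anonymous token endpoint:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/node-driver-registrar", "v3.30.4"

	// 1. Anonymous pull token; ghcr.io follows the docker/OCI token scheme
	//    (endpoint assumed, as is standard for this registry).
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	var tok struct {
		Token string `json:"token"`
	}
	err = json.NewDecoder(resp.Body).Decode(&tok)
	resp.Body.Close()
	if err != nil {
		panic(err)
	}

	// 2. HEAD the manifest for the tag; the 404 that containerd reports as
	//    "failed to resolve reference ... not found" shows up here as-is.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println(repo+":"+tag, "->", res.Status)
}
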
Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.238 [INFO][4598] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.238 [INFO][4598] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.248 [INFO][4598] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" host="localhost" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.253 [INFO][4598] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.257 [INFO][4598] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.258 [INFO][4598] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.259 [INFO][4598] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.260 [INFO][4598] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" host="localhost" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.261 [INFO][4598] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.278 [INFO][4598] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" host="localhost" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.284 [INFO][4598] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" host="localhost" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.284 [INFO][4598] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" host="localhost" Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.284 [INFO][4598] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
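
The earlier kubelet dns.go warning ("Nameserver limits exceeded") reflects the resolver's three-nameserver ceiling: with 1.1.1.1, 1.0.0.1 and 8.8.8.8 already applied, any further resolv.conf entries are dropped. A sketch of that trimming, assuming the conventional MAXNS = 3 limit:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	const maxNS = 3 // MAXNS in glibc's resolver; kubelet applies the same cap
	if len(servers) > maxNS {
		fmt.Println("dropped:", servers[maxNS:])
		servers = servers[:maxNS]
	}
	fmt.Println("applied:", servers)
}
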
Nov 1 00:35:51.301419 containerd[1468]: 2025-11-01 00:35:51.284 [INFO][4598] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" HandleID="k8s-pod-network.dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:35:51.302012 containerd[1468]: 2025-11-01 00:35:51.286 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Namespace="calico-system" Pod="goldmane-666569f655-rlz6p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rlz6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--rlz6p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"331a1960-88ad-4608-9f70-708ee400d030", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-rlz6p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibb72cf79e7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:51.302012 containerd[1468]: 2025-11-01 00:35:51.287 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Namespace="calico-system" Pod="goldmane-666569f655-rlz6p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:35:51.302012 containerd[1468]: 2025-11-01 00:35:51.287 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb72cf79e7a ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Namespace="calico-system" Pod="goldmane-666569f655-rlz6p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:35:51.302012 containerd[1468]: 2025-11-01 00:35:51.289 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Namespace="calico-system" Pod="goldmane-666569f655-rlz6p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:35:51.302012 containerd[1468]: 2025-11-01 00:35:51.289 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Namespace="calico-system" Pod="goldmane-666569f655-rlz6p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rlz6p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--rlz6p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"331a1960-88ad-4608-9f70-708ee400d030", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a", Pod:"goldmane-666569f655-rlz6p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibb72cf79e7a", MAC:"7a:96:27:fe:87:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:51.302012 containerd[1468]: 2025-11-01 00:35:51.298 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a" Namespace="calico-system" Pod="goldmane-666569f655-rlz6p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:35:51.378412 containerd[1468]: time="2025-11-01T00:35:51.377714415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:51.378412 containerd[1468]: time="2025-11-01T00:35:51.378292931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:51.378412 containerd[1468]: time="2025-11-01T00:35:51.378310144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:51.378936 containerd[1468]: time="2025-11-01T00:35:51.378640482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:51.397761 systemd[1]: Started cri-containerd-dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a.scope - libcontainer container dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a. 
Nov 1 00:35:51.411971 systemd-networkd[1389]: calia3cd08bb14c: Link UP Nov 1 00:35:51.412819 systemd-networkd[1389]: calia3cd08bb14c: Gained carrier Nov 1 00:35:51.420422 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.195 [INFO][4561] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.211 [INFO][4561] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0 coredns-668d6bf9bc- kube-system 02d687a0-8306-485c-897b-e3fc603e4632 1054 0 2025-11-01 00:35:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-ct4jw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia3cd08bb14c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-ct4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ct4jw-" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.211 [INFO][4561] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-ct4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.239 [INFO][4608] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" HandleID="k8s-pod-network.c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.240 [INFO][4608] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" HandleID="k8s-pod-network.c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-ct4jw", "timestamp":"2025-11-01 00:35:51.239863857 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.242 [INFO][4608] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.284 [INFO][4608] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
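
Once a pull fails, kubelet does not retry immediately; the ImagePullBackOff entries above are its exponential backoff between attempts. Assuming the usual defaults of a 10s base doubling to a 5m cap, the retry schedule looks like this:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet image-pull backoff defaults: 10s base, doubling, 5m cap.
	d, limit := 10*time.Second, 5*time.Minute
	for i := 1; i <= 7; i++ {
		fmt.Printf("pull attempt %d retried after %v\n", i, d)
		d *= 2
		if d > limit {
			d = limit
		}
	}
}
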
Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.284 [INFO][4608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.363 [INFO][4608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" host="localhost" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.368 [INFO][4608] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.375 [INFO][4608] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.377 [INFO][4608] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.380 [INFO][4608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.380 [INFO][4608] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" host="localhost" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.384 [INFO][4608] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9 Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.393 [INFO][4608] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" host="localhost" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.401 [INFO][4608] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" host="localhost" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.401 [INFO][4608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" host="localhost" Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.401 [INFO][4608] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
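
Each "Link UP" / "Gained carrier" pair from systemd-networkd corresponds to the host side of a pod's veth (cali10bab976708, calibb72cf79e7a, calia3cd08bb14c, ...). They are ordinary network interfaces on the node and can be enumerated with the Go standard library:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, it := range ifaces {
		// Calico prefixes its host-side veth names with "cali", as in the log.
		if strings.HasPrefix(it.Name, "cali") {
			fmt.Printf("%-16s mtu=%d flags=%v\n", it.Name, it.MTU, it.Flags)
		}
	}
}
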
Nov 1 00:35:51.438699 containerd[1468]: 2025-11-01 00:35:51.401 [INFO][4608] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" HandleID="k8s-pod-network.c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:35:51.439228 containerd[1468]: 2025-11-01 00:35:51.406 [INFO][4561] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-ct4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02d687a0-8306-485c-897b-e3fc603e4632", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-ct4jw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3cd08bb14c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:51.439228 containerd[1468]: 2025-11-01 00:35:51.406 [INFO][4561] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-ct4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:35:51.439228 containerd[1468]: 2025-11-01 00:35:51.406 [INFO][4561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3cd08bb14c ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-ct4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:35:51.439228 containerd[1468]: 2025-11-01 00:35:51.413 [INFO][4561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-ct4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:35:51.439228 
containerd[1468]: 2025-11-01 00:35:51.413 [INFO][4561] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-ct4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02d687a0-8306-485c-897b-e3fc603e4632", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9", Pod:"coredns-668d6bf9bc-ct4jw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3cd08bb14c", MAC:"52:92:87:85:f3:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:51.439228 containerd[1468]: 2025-11-01 00:35:51.430 [INFO][4561] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-ct4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:35:51.456189 containerd[1468]: time="2025-11-01T00:35:51.456138591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rlz6p,Uid:331a1960-88ad-4608-9f70-708ee400d030,Namespace:calico-system,Attempt:1,} returns sandbox id \"dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a\"" Nov 1 00:35:51.459017 containerd[1468]: time="2025-11-01T00:35:51.458991537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:35:51.487958 containerd[1468]: time="2025-11-01T00:35:51.484419783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:51.487958 containerd[1468]: time="2025-11-01T00:35:51.484490298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:51.487958 containerd[1468]: time="2025-11-01T00:35:51.484503765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:51.487958 containerd[1468]: time="2025-11-01T00:35:51.487716335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:51.516748 systemd[1]: Started cri-containerd-c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9.scope - libcontainer container c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9. Nov 1 00:35:51.524369 systemd-networkd[1389]: cali91bba4e6801: Link UP Nov 1 00:35:51.525339 systemd-networkd[1389]: cali91bba4e6801: Gained carrier Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.207 [INFO][4578] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.221 [INFO][4578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0 calico-apiserver-68fc7bb9b7- calico-apiserver d57a8509-e37c-4d69-93aa-35fdadef5de6 1055 0 2025-11-01 00:35:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68fc7bb9b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68fc7bb9b7-tvhcs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali91bba4e6801 [] [] }} ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-tvhcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.221 [INFO][4578] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-tvhcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.260 [INFO][4614] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" HandleID="k8s-pod-network.f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.260 [INFO][4614] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" HandleID="k8s-pod-network.f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68fc7bb9b7-tvhcs", "timestamp":"2025-11-01 00:35:51.260284939 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.260 [INFO][4614] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.401 [INFO][4614] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.401 [INFO][4614] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.454 [INFO][4614] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" host="localhost" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.473 [INFO][4614] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.486 [INFO][4614] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.491 [INFO][4614] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.498 [INFO][4614] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.498 [INFO][4614] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" host="localhost" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.502 [INFO][4614] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.507 [INFO][4614] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" host="localhost" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.514 [INFO][4614] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" host="localhost" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.515 [INFO][4614] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" host="localhost" Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.516 [INFO][4614] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:35:51.541046 containerd[1468]: 2025-11-01 00:35:51.516 [INFO][4614] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" HandleID="k8s-pod-network.f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:35:51.542270 containerd[1468]: 2025-11-01 00:35:51.521 [INFO][4578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-tvhcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0", GenerateName:"calico-apiserver-68fc7bb9b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d57a8509-e37c-4d69-93aa-35fdadef5de6", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68fc7bb9b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68fc7bb9b7-tvhcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91bba4e6801", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:51.542270 containerd[1468]: 2025-11-01 00:35:51.521 [INFO][4578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-tvhcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:35:51.542270 containerd[1468]: 2025-11-01 00:35:51.521 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91bba4e6801 ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-tvhcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:35:51.542270 containerd[1468]: 2025-11-01 00:35:51.525 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-tvhcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:35:51.542270 containerd[1468]: 2025-11-01 00:35:51.526 [INFO][4578] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-tvhcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0", GenerateName:"calico-apiserver-68fc7bb9b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d57a8509-e37c-4d69-93aa-35fdadef5de6", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68fc7bb9b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c", Pod:"calico-apiserver-68fc7bb9b7-tvhcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91bba4e6801", MAC:"06:45:9d:f1:4d:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:51.542270 containerd[1468]: 2025-11-01 00:35:51.533 [INFO][4578] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c" Namespace="calico-apiserver" Pod="calico-apiserver-68fc7bb9b7-tvhcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:35:51.545582 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:35:51.565714 containerd[1468]: time="2025-11-01T00:35:51.564810324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:51.565714 containerd[1468]: time="2025-11-01T00:35:51.564865310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:51.565714 containerd[1468]: time="2025-11-01T00:35:51.564878025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:51.565714 containerd[1468]: time="2025-11-01T00:35:51.564967888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:51.577045 containerd[1468]: time="2025-11-01T00:35:51.576981053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ct4jw,Uid:02d687a0-8306-485c-897b-e3fc603e4632,Namespace:kube-system,Attempt:1,} returns sandbox id \"c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9\"" Nov 1 00:35:51.578231 kubelet[2499]: E1101 00:35:51.577846 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:51.580477 containerd[1468]: time="2025-11-01T00:35:51.580398740Z" level=info msg="CreateContainer within sandbox \"c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:35:51.586772 systemd[1]: Started cri-containerd-f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c.scope - libcontainer container f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c. Nov 1 00:35:51.603914 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:35:51.609157 containerd[1468]: time="2025-11-01T00:35:51.609118006Z" level=info msg="CreateContainer within sandbox \"c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f8b547f78ab34f5eb274e24050cfa9dc08968d121fff9095b61ca93ffa5046e\"" Nov 1 00:35:51.617036 containerd[1468]: time="2025-11-01T00:35:51.615728765Z" level=info msg="StartContainer for \"7f8b547f78ab34f5eb274e24050cfa9dc08968d121fff9095b61ca93ffa5046e\"" Nov 1 00:35:51.633787 systemd-networkd[1389]: cali97ecd072bb2: Gained IPv6LL Nov 1 00:35:51.658216 systemd[1]: Started cri-containerd-7f8b547f78ab34f5eb274e24050cfa9dc08968d121fff9095b61ca93ffa5046e.scope - libcontainer container 7f8b547f78ab34f5eb274e24050cfa9dc08968d121fff9095b61ca93ffa5046e. 
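[Editor's note] The dns.go:153 "Nameserver limits exceeded" error just logged recurs throughout this capture: kubelet applies at most three nameservers to a pod's resolv.conf (mirroring the classic glibc MAXNS=3 cap) and drops the rest, here keeping 1.1.1.1, 1.0.0.1, and 8.8.8.8. An illustrative pre-check — the /etc/resolv.conf path is an assumption, adjust for your node:

    from pathlib import Path

    MAX_NAMESERVERS = 3  # kubelet/glibc cap; servers beyond this are dropped

    def check_resolv_conf(path="/etc/resolv.conf"):
        servers = [parts[1] for line in Path(path).read_text().splitlines()
                   if (parts := line.split()) and parts[0] == "nameserver"]
        if len(servers) > MAX_NAMESERVERS:
            print(f"{len(servers)} nameservers; kubelet will apply only "
                  f"{servers[:MAX_NAMESERVERS]}")
        else:
            print(f"OK: {servers}")

    check_resolv_conf()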
Nov 1 00:35:51.667483 containerd[1468]: time="2025-11-01T00:35:51.666889504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68fc7bb9b7-tvhcs,Uid:d57a8509-e37c-4d69-93aa-35fdadef5de6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c\"" Nov 1 00:35:51.704866 containerd[1468]: time="2025-11-01T00:35:51.704674187Z" level=info msg="StartContainer for \"7f8b547f78ab34f5eb274e24050cfa9dc08968d121fff9095b61ca93ffa5046e\" returns successfully" Nov 1 00:35:51.776268 containerd[1468]: time="2025-11-01T00:35:51.776207615Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:35:51.783435 containerd[1468]: time="2025-11-01T00:35:51.783396540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:35:51.783498 containerd[1468]: time="2025-11-01T00:35:51.783439722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:35:51.783655 kubelet[2499]: E1101 00:35:51.783613 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:35:51.783759 kubelet[2499]: E1101 00:35:51.783659 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:35:51.783958 kubelet[2499]: E1101 00:35:51.783910 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djwph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rlz6p_calico-system(331a1960-88ad-4608-9f70-708ee400d030): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:35:51.784323 containerd[1468]: time="2025-11-01T00:35:51.784282709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:35:51.785850 kubelet[2499]: E1101 00:35:51.785811 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-rlz6p" podUID="331a1960-88ad-4608-9f70-708ee400d030" Nov 1 00:35:52.012817 containerd[1468]: time="2025-11-01T00:35:52.012764734Z" level=info msg="StopPodSandbox for \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\"" Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.056 [INFO][4844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.056 [INFO][4844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" iface="eth0" netns="/var/run/netns/cni-d0e3637d-6a17-95fd-2fb8-f4e506a0cc31" Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.056 [INFO][4844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" iface="eth0" netns="/var/run/netns/cni-d0e3637d-6a17-95fd-2fb8-f4e506a0cc31" Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.057 [INFO][4844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" iface="eth0" netns="/var/run/netns/cni-d0e3637d-6a17-95fd-2fb8-f4e506a0cc31" Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.057 [INFO][4844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.057 [INFO][4844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.077 [INFO][4852] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" HandleID="k8s-pod-network.e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.077 [INFO][4852] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.077 [INFO][4852] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.081 [WARNING][4852] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" HandleID="k8s-pod-network.e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.081 [INFO][4852] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" HandleID="k8s-pod-network.e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.083 [INFO][4852] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:35:52.089684 containerd[1468]: 2025-11-01 00:35:52.086 [INFO][4844] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:35:52.090086 containerd[1468]: time="2025-11-01T00:35:52.089840353Z" level=info msg="TearDown network for sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\" successfully" Nov 1 00:35:52.090086 containerd[1468]: time="2025-11-01T00:35:52.089868848Z" level=info msg="StopPodSandbox for \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\" returns successfully" Nov 1 00:35:52.090223 kubelet[2499]: E1101 00:35:52.090199 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:52.090700 containerd[1468]: time="2025-11-01T00:35:52.090579278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4fmz,Uid:8731e9b0-7c90-4504-b50f-7b034a8b8a07,Namespace:kube-system,Attempt:1,}" Nov 1 00:35:52.092946 systemd[1]: run-netns-cni\x2dd0e3637d\x2d6a17\x2d95fd\x2d2fb8\x2df4e506a0cc31.mount: Deactivated successfully. Nov 1 00:35:52.116717 containerd[1468]: time="2025-11-01T00:35:52.116667428Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:35:52.117838 containerd[1468]: time="2025-11-01T00:35:52.117779403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:35:52.118016 containerd[1468]: time="2025-11-01T00:35:52.117853636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:35:52.118049 kubelet[2499]: E1101 00:35:52.117970 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:35:52.118049 kubelet[2499]: E1101 00:35:52.118015 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:35:52.118171 kubelet[2499]: E1101 00:35:52.118132 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jffx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68fc7bb9b7-tvhcs_calico-apiserver(d57a8509-e37c-4d69-93aa-35fdadef5de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:35:52.119501 kubelet[2499]: E1101 00:35:52.119445 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs" podUID="d57a8509-e37c-4d69-93aa-35fdadef5de6" Nov 1 00:35:52.149225 kubelet[2499]: E1101 00:35:52.149153 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs" podUID="d57a8509-e37c-4d69-93aa-35fdadef5de6" Nov 1 00:35:52.155024 kubelet[2499]: E1101 00:35:52.154996 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:52.164343 kubelet[2499]: E1101 00:35:52.164270 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" podUID="42b9da1b-c5f5-468c-9b0b-bd955feccb34" Nov 1 00:35:52.164343 kubelet[2499]: E1101 00:35:52.164343 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rlz6p" podUID="331a1960-88ad-4608-9f70-708ee400d030" Nov 1 00:35:52.178507 kubelet[2499]: I1101 00:35:52.178447 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ct4jw" podStartSLOduration=36.178433052 podStartE2EDuration="36.178433052s" podCreationTimestamp="2025-11-01 00:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:35:52.175570862 +0000 UTC m=+42.250181075" watchObservedRunningTime="2025-11-01 00:35:52.178433052 +0000 UTC m=+42.253043265" Nov 1 00:35:52.210975 systemd-networkd[1389]: cali283072d4826: Link UP Nov 1 00:35:52.211161 systemd-networkd[1389]: cali283072d4826: Gained carrier Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.124 [INFO][4861] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.133 [INFO][4861] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0 coredns-668d6bf9bc- kube-system 8731e9b0-7c90-4504-b50f-7b034a8b8a07 1099 0 2025-11-01 00:35:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-g4fmz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali283072d4826 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4fmz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g4fmz-" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.133 [INFO][4861] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4fmz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.162 [INFO][4877] ipam/ipam_plugin.go 227: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" HandleID="k8s-pod-network.98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.162 [INFO][4877] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" HandleID="k8s-pod-network.98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e920), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-g4fmz", "timestamp":"2025-11-01 00:35:52.162181731 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.162 [INFO][4877] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.162 [INFO][4877] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.162 [INFO][4877] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.168 [INFO][4877] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" host="localhost" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.173 [INFO][4877] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.182 [INFO][4877] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.185 [INFO][4877] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.190 [INFO][4877] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.190 [INFO][4877] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" host="localhost" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.191 [INFO][4877] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280 Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.195 [INFO][4877] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" host="localhost" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.202 [INFO][4877] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" host="localhost" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.202 [INFO][4877] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] 
handle="k8s-pod-network.98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" host="localhost" Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.202 [INFO][4877] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:35:52.221144 containerd[1468]: 2025-11-01 00:35:52.202 [INFO][4877] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" HandleID="k8s-pod-network.98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:35:52.221742 containerd[1468]: 2025-11-01 00:35:52.206 [INFO][4861] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4fmz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8731e9b0-7c90-4504-b50f-7b034a8b8a07", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-g4fmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283072d4826", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:52.221742 containerd[1468]: 2025-11-01 00:35:52.206 [INFO][4861] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4fmz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:35:52.221742 containerd[1468]: 2025-11-01 00:35:52.206 [INFO][4861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali283072d4826 ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4fmz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:35:52.221742 containerd[1468]: 2025-11-01 00:35:52.208 [INFO][4861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4fmz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:35:52.221742 containerd[1468]: 2025-11-01 00:35:52.209 [INFO][4861] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4fmz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8731e9b0-7c90-4504-b50f-7b034a8b8a07", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280", Pod:"coredns-668d6bf9bc-g4fmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283072d4826", MAC:"6e:26:e2:37:1f:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:35:52.221742 containerd[1468]: 2025-11-01 00:35:52.216 [INFO][4861] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4fmz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:35:52.240686 containerd[1468]: time="2025-11-01T00:35:52.240571173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:35:52.240686 containerd[1468]: time="2025-11-01T00:35:52.240649705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:35:52.240686 containerd[1468]: time="2025-11-01T00:35:52.240660876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:52.241026 containerd[1468]: time="2025-11-01T00:35:52.240759657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:35:52.263773 systemd[1]: Started cri-containerd-98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280.scope - libcontainer container 98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280. Nov 1 00:35:52.276536 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:35:52.300416 containerd[1468]: time="2025-11-01T00:35:52.300375985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4fmz,Uid:8731e9b0-7c90-4504-b50f-7b034a8b8a07,Namespace:kube-system,Attempt:1,} returns sandbox id \"98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280\"" Nov 1 00:35:52.301126 kubelet[2499]: E1101 00:35:52.301093 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:52.303382 containerd[1468]: time="2025-11-01T00:35:52.303336093Z" level=info msg="CreateContainer within sandbox \"98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:35:52.322695 containerd[1468]: time="2025-11-01T00:35:52.322649008Z" level=info msg="CreateContainer within sandbox \"98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d574afb564f598782d66aed53e33e790096c50a23ac811d8b426c4e912a2215\"" Nov 1 00:35:52.323078 containerd[1468]: time="2025-11-01T00:35:52.323046985Z" level=info msg="StartContainer for \"7d574afb564f598782d66aed53e33e790096c50a23ac811d8b426c4e912a2215\"" Nov 1 00:35:52.350772 systemd[1]: Started cri-containerd-7d574afb564f598782d66aed53e33e790096c50a23ac811d8b426c4e912a2215.scope - libcontainer container 7d574afb564f598782d66aed53e33e790096c50a23ac811d8b426c4e912a2215. Nov 1 00:35:52.376374 containerd[1468]: time="2025-11-01T00:35:52.376344799Z" level=info msg="StartContainer for \"7d574afb564f598782d66aed53e33e790096c50a23ac811d8b426c4e912a2215\" returns successfully" Nov 1 00:35:52.721741 systemd-networkd[1389]: calia3cd08bb14c: Gained IPv6LL Nov 1 00:35:52.902426 systemd[1]: Started sshd@8-10.0.0.5:22-10.0.0.1:33652.service - OpenSSH per-connection server daemon (10.0.0.1:33652). Nov 1 00:35:52.942838 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 33652 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:35:52.944788 sshd[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:35:52.948931 systemd-logind[1455]: New session 9 of user core. Nov 1 00:35:52.956731 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:35:53.080919 sshd[5002]: pam_unix(sshd:session): session closed for user core Nov 1 00:35:53.085159 systemd[1]: sshd@8-10.0.0.5:22-10.0.0.1:33652.service: Deactivated successfully. Nov 1 00:35:53.087226 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:35:53.087961 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:35:53.088784 systemd-logind[1455]: Removed session 9. 
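[Editor's note] The podStartSLOduration figures reported earlier come from kubelet's startup-latency tracker: observed running time minus pod creation time. An illustrative reproduction for coredns-668d6bf9bc-ct4jw, with both timestamps copied from its entry above (nanoseconds truncated to the microseconds Python's datetime can hold):

    from datetime import datetime, timezone

    created  = datetime(2025, 11, 1, 0, 35, 16, tzinfo=timezone.utc)  # podCreationTimestamp
    observed = datetime.fromisoformat("2025-11-01T00:35:52.178433+00:00")  # watchObservedRunningTime

    print((observed - created).total_seconds())   # 36.178433, matching podStartSLOduration

The ~36 s figure reflects the whole sandbox-teardown/re-create cycle visible above, not image pull time — both pull timestamps are the zero value because the coredns image was already present.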
Nov 1 00:35:53.105741 systemd-networkd[1389]: calibb72cf79e7a: Gained IPv6LL Nov 1 00:35:53.165014 kubelet[2499]: E1101 00:35:53.164972 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:53.165431 kubelet[2499]: E1101 00:35:53.165093 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:53.165923 kubelet[2499]: E1101 00:35:53.165884 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rlz6p" podUID="331a1960-88ad-4608-9f70-708ee400d030" Nov 1 00:35:53.166086 kubelet[2499]: E1101 00:35:53.165948 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs" podUID="d57a8509-e37c-4d69-93aa-35fdadef5de6" Nov 1 00:35:53.170773 systemd-networkd[1389]: cali91bba4e6801: Gained IPv6LL Nov 1 00:35:53.190208 kubelet[2499]: I1101 00:35:53.189588 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g4fmz" podStartSLOduration=37.189541634 podStartE2EDuration="37.189541634s" podCreationTimestamp="2025-11-01 00:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:35:53.181294441 +0000 UTC m=+43.255904654" watchObservedRunningTime="2025-11-01 00:35:53.189541634 +0000 UTC m=+43.264151847" Nov 1 00:35:53.361765 systemd-networkd[1389]: cali283072d4826: Gained IPv6LL Nov 1 00:35:54.166882 kubelet[2499]: E1101 00:35:54.166851 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:54.167683 kubelet[2499]: E1101 00:35:54.166965 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:55.168466 kubelet[2499]: E1101 00:35:55.168390 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:56.162250 kubelet[2499]: I1101 00:35:56.162215 2499 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:35:56.162629 kubelet[2499]: E1101 00:35:56.162589 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:56.169746 kubelet[2499]: E1101 00:35:56.169730 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:56.170104 kubelet[2499]: E1101 00:35:56.170002 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:35:56.730623 kernel: bpftool[5121]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:35:56.961385 systemd-networkd[1389]: vxlan.calico: Link UP Nov 1 00:35:56.961397 systemd-networkd[1389]: vxlan.calico: Gained carrier Nov 1 00:35:58.096688 systemd[1]: Started sshd@9-10.0.0.5:22-10.0.0.1:34202.service - OpenSSH per-connection server daemon (10.0.0.1:34202). Nov 1 00:35:58.136999 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 34202 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:35:58.138566 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:35:58.142352 systemd-logind[1455]: New session 10 of user core. Nov 1 00:35:58.148718 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:35:58.229670 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Nov 1 00:35:58.267131 sshd[5235]: pam_unix(sshd:session): session closed for user core Nov 1 00:35:58.274236 systemd[1]: sshd@9-10.0.0.5:22-10.0.0.1:34202.service: Deactivated successfully. Nov 1 00:35:58.275837 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:35:58.277193 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:35:58.284838 systemd[1]: Started sshd@10-10.0.0.5:22-10.0.0.1:34204.service - OpenSSH per-connection server daemon (10.0.0.1:34204). Nov 1 00:35:58.285607 systemd-logind[1455]: Removed session 10. Nov 1 00:35:58.315488 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 34204 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:35:58.316968 sshd[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:35:58.320650 systemd-logind[1455]: New session 11 of user core. Nov 1 00:35:58.329712 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:35:58.464547 sshd[5253]: pam_unix(sshd:session): session closed for user core Nov 1 00:35:58.472958 systemd[1]: sshd@10-10.0.0.5:22-10.0.0.1:34204.service: Deactivated successfully. Nov 1 00:35:58.474738 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:35:58.477663 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:35:58.483299 systemd[1]: Started sshd@11-10.0.0.5:22-10.0.0.1:34208.service - OpenSSH per-connection server daemon (10.0.0.1:34208). Nov 1 00:35:58.485012 systemd-logind[1455]: Removed session 11. Nov 1 00:35:58.518004 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 34208 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:35:58.519734 sshd[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:35:58.523513 systemd-logind[1455]: New session 12 of user core. Nov 1 00:35:58.533711 systemd[1]: Started session-12.scope - Session 12 of User core. 
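[Editor's note] The kernel line about bpftool[5121] above is a warning added in Linux 6.3: memfd_create() callers that pass neither MFD_EXEC nor MFD_NOEXEC_SEAL get nagged once, because memfds default to executable and that default is slated to change. A sketch of the explicit, non-executable form — the fallback constant value is taken from linux/memfd.h, since older Python builds don't export it, and the flag itself needs a >= 6.3 kernel:

    import os

    # Value from linux/memfd.h if the running Python predates the constant.
    MFD_NOEXEC_SEAL = getattr(os, "MFD_NOEXEC_SEAL", 0x0008)

    # Explicitly non-executable anonymous memory; avoids the kernel warning.
    fd = os.memfd_create("demo", os.MFD_CLOEXEC | MFD_NOEXEC_SEAL)
    os.write(fd, b"anonymous, sealed-non-exec memory")
    os.close(fd)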
Nov 1 00:35:58.644900 sshd[5265]: pam_unix(sshd:session): session closed for user core Nov 1 00:35:58.648464 systemd[1]: sshd@11-10.0.0.5:22-10.0.0.1:34208.service: Deactivated successfully. Nov 1 00:35:58.650344 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:35:58.651027 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:35:58.651989 systemd-logind[1455]: Removed session 12. Nov 1 00:36:00.030177 containerd[1468]: time="2025-11-01T00:36:00.029738467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:36:00.349771 containerd[1468]: time="2025-11-01T00:36:00.349657459Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:00.350970 containerd[1468]: time="2025-11-01T00:36:00.350910232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:36:00.351113 containerd[1468]: time="2025-11-01T00:36:00.350982771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:36:00.351163 kubelet[2499]: E1101 00:36:00.351082 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:36:00.351163 kubelet[2499]: E1101 00:36:00.351121 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:36:00.351625 kubelet[2499]: E1101 00:36:00.351226 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:79277034d2d74e8eb716aae70187f367,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmc6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f6ff4bc47-cjjhn_calico-system(d166a932-62b2-424c-af81-b672793d3ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:00.353057 containerd[1468]: time="2025-11-01T00:36:00.353017524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:36:00.705433 containerd[1468]: time="2025-11-01T00:36:00.705373532Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:00.706491 containerd[1468]: time="2025-11-01T00:36:00.706450447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:36:00.706556 containerd[1468]: time="2025-11-01T00:36:00.706486877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:36:00.706726 kubelet[2499]: E1101 00:36:00.706677 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:36:00.706785 kubelet[2499]: E1101 00:36:00.706739 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:36:00.706931 kubelet[2499]: E1101 00:36:00.706877 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmc6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f6ff4bc47-cjjhn_calico-system(d166a932-62b2-424c-af81-b672793d3ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:00.708781 kubelet[2499]: E1101 00:36:00.708050 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f6ff4bc47-cjjhn" podUID="d166a932-62b2-424c-af81-b672793d3ad2" Nov 1 00:36:03.659644 systemd[1]: Started sshd@12-10.0.0.5:22-10.0.0.1:53024.service - OpenSSH per-connection server daemon (10.0.0.1:53024). 
Nov 1 00:36:03.693917 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 53024 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:03.695373 sshd[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:03.699230 systemd-logind[1455]: New session 13 of user core. Nov 1 00:36:03.702804 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:36:03.845229 sshd[5289]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:03.849429 systemd[1]: sshd@12-10.0.0.5:22-10.0.0.1:53024.service: Deactivated successfully. Nov 1 00:36:03.851475 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:36:03.852100 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:36:03.852963 systemd-logind[1455]: Removed session 13. Nov 1 00:36:04.013529 containerd[1468]: time="2025-11-01T00:36:04.013264261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:36:04.476614 containerd[1468]: time="2025-11-01T00:36:04.476545138Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:04.477818 containerd[1468]: time="2025-11-01T00:36:04.477760064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:36:04.477986 containerd[1468]: time="2025-11-01T00:36:04.477832912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:36:04.478015 kubelet[2499]: E1101 00:36:04.477974 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:36:04.478378 kubelet[2499]: E1101 00:36:04.478016 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:36:04.478378 kubelet[2499]: E1101 00:36:04.478197 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-278m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jzfns_calico-system(31c28b53-e76c-45d5-b66c-cb1d82d504b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:04.478491 containerd[1468]: time="2025-11-01T00:36:04.478303343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:36:04.793217 containerd[1468]: time="2025-11-01T00:36:04.793089638Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:04.794198 containerd[1468]: time="2025-11-01T00:36:04.794165577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:36:04.794293 containerd[1468]: time="2025-11-01T00:36:04.794229268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:36:04.794391 kubelet[2499]: E1101 00:36:04.794351 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:36:04.794437 kubelet[2499]: E1101 00:36:04.794402 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:36:04.794705 kubelet[2499]: E1101 00:36:04.794639 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4vct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68fc7bb9b7-c7qgt_calico-apiserver(42b9da1b-c5f5-468c-9b0b-bd955feccb34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:04.794876 containerd[1468]: time="2025-11-01T00:36:04.794711271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:36:04.796193 kubelet[2499]: E1101 00:36:04.796149 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" podUID="42b9da1b-c5f5-468c-9b0b-bd955feccb34" Nov 1 00:36:05.118927 containerd[1468]: time="2025-11-01T00:36:05.118800833Z" level=info msg="trying next 
host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:05.119942 containerd[1468]: time="2025-11-01T00:36:05.119891960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:36:05.120004 containerd[1468]: time="2025-11-01T00:36:05.119957154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:36:05.120071 kubelet[2499]: E1101 00:36:05.120036 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:36:05.120116 kubelet[2499]: E1101 00:36:05.120078 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:36:05.120350 kubelet[2499]: E1101 00:36:05.120274 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cxdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64f94746cd-5r8bx_calico-system(4ca70b04-3681-42b1-b3b8-746e67038cfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:05.120540 containerd[1468]: time="2025-11-01T00:36:05.120333124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:36:05.121752 kubelet[2499]: E1101 00:36:05.121709 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" podUID="4ca70b04-3681-42b1-b3b8-746e67038cfe" Nov 1 00:36:05.448292 containerd[1468]: time="2025-11-01T00:36:05.448259341Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:05.449354 containerd[1468]: time="2025-11-01T00:36:05.449309379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:36:05.449480 containerd[1468]: time="2025-11-01T00:36:05.449362451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:36:05.449523 kubelet[2499]: E1101 00:36:05.449477 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:36:05.449562 kubelet[2499]: E1101 00:36:05.449524 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:36:05.449790 kubelet[2499]: E1101 00:36:05.449731 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-278m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jzfns_calico-system(31c28b53-e76c-45d5-b66c-cb1d82d504b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:05.449997 containerd[1468]: time="2025-11-01T00:36:05.449862998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:36:05.451683 kubelet[2499]: E1101 00:36:05.451646 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jzfns" 
podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6" Nov 1 00:36:05.764983 containerd[1468]: time="2025-11-01T00:36:05.764897014Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:05.765959 containerd[1468]: time="2025-11-01T00:36:05.765921133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:36:05.766021 containerd[1468]: time="2025-11-01T00:36:05.765985345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:36:05.766109 kubelet[2499]: E1101 00:36:05.766080 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:36:05.766422 kubelet[2499]: E1101 00:36:05.766115 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:36:05.766422 kubelet[2499]: E1101 00:36:05.766343 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djwph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rlz6p_calico-system(331a1960-88ad-4608-9f70-708ee400d030): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:05.766747 containerd[1468]: time="2025-11-01T00:36:05.766713549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:36:05.767881 kubelet[2499]: E1101 00:36:05.767822 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rlz6p" podUID="331a1960-88ad-4608-9f70-708ee400d030" Nov 1 00:36:06.100727 containerd[1468]: time="2025-11-01T00:36:06.100581635Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:06.101738 containerd[1468]: time="2025-11-01T00:36:06.101707386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:36:06.101800 containerd[1468]: time="2025-11-01T00:36:06.101770797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:36:06.101952 kubelet[2499]: E1101 00:36:06.101908 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:36:06.102004 kubelet[2499]: E1101 00:36:06.101956 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:36:06.102151 kubelet[2499]: E1101 00:36:06.102109 2499 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jffx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68fc7bb9b7-tvhcs_calico-apiserver(d57a8509-e37c-4d69-93aa-35fdadef5de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:06.103830 kubelet[2499]: E1101 00:36:06.103504 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs" podUID="d57a8509-e37c-4d69-93aa-35fdadef5de6" Nov 1 00:36:08.863628 systemd[1]: Started sshd@13-10.0.0.5:22-10.0.0.1:53032.service - OpenSSH per-connection server daemon (10.0.0.1:53032). Nov 1 00:36:08.901827 sshd[5315]: Accepted publickey for core from 10.0.0.1 port 53032 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:08.903425 sshd[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:08.907211 systemd-logind[1455]: New session 14 of user core. Nov 1 00:36:08.913809 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 1 00:36:09.020220 sshd[5315]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:09.024340 systemd[1]: sshd@13-10.0.0.5:22-10.0.0.1:53032.service: Deactivated successfully. Nov 1 00:36:09.026436 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:36:09.027365 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:36:09.028447 systemd-logind[1455]: Removed session 14. Nov 1 00:36:09.991697 containerd[1468]: time="2025-11-01T00:36:09.991536710Z" level=info msg="StopPodSandbox for \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\"" Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.160 [WARNING][5338] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8731e9b0-7c90-4504-b50f-7b034a8b8a07", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280", Pod:"coredns-668d6bf9bc-g4fmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283072d4826", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.161 [INFO][5338] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.161 [INFO][5338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" iface="eth0" netns="" Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.161 [INFO][5338] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.161 [INFO][5338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.184 [INFO][5349] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" HandleID="k8s-pod-network.e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.184 [INFO][5349] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.184 [INFO][5349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.190 [WARNING][5349] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" HandleID="k8s-pod-network.e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.190 [INFO][5349] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" HandleID="k8s-pod-network.e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.191 [INFO][5349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.199126 containerd[1468]: 2025-11-01 00:36:10.194 [INFO][5338] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:36:10.199704 containerd[1468]: time="2025-11-01T00:36:10.199169824Z" level=info msg="TearDown network for sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\" successfully" Nov 1 00:36:10.199704 containerd[1468]: time="2025-11-01T00:36:10.199200011Z" level=info msg="StopPodSandbox for \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\" returns successfully" Nov 1 00:36:10.199884 containerd[1468]: time="2025-11-01T00:36:10.199846966Z" level=info msg="RemovePodSandbox for \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\"" Nov 1 00:36:10.202315 containerd[1468]: time="2025-11-01T00:36:10.202255131Z" level=info msg="Forcibly stopping sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\"" Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.233 [WARNING][5367] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8731e9b0-7c90-4504-b50f-7b034a8b8a07", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98366ee8a090f36445f4f1786c6e7ad23893047e6864686b57c8b8a64a07c280", Pod:"coredns-668d6bf9bc-g4fmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283072d4826", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.233 [INFO][5367] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.233 [INFO][5367] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" iface="eth0" netns="" Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.233 [INFO][5367] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.233 [INFO][5367] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.255 [INFO][5376] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" HandleID="k8s-pod-network.e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.255 [INFO][5376] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.255 [INFO][5376] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.259 [WARNING][5376] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" HandleID="k8s-pod-network.e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.259 [INFO][5376] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" HandleID="k8s-pod-network.e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Workload="localhost-k8s-coredns--668d6bf9bc--g4fmz-eth0" Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.260 [INFO][5376] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.265759 containerd[1468]: 2025-11-01 00:36:10.263 [INFO][5367] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b" Nov 1 00:36:10.265759 containerd[1468]: time="2025-11-01T00:36:10.265726441Z" level=info msg="TearDown network for sandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\" successfully" Nov 1 00:36:10.289858 containerd[1468]: time="2025-11-01T00:36:10.289805099Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:36:10.289858 containerd[1468]: time="2025-11-01T00:36:10.289876926Z" level=info msg="RemovePodSandbox \"e7684fca931ff636aee3d44dc3c5a33e8fb08e4305080ca5a137ee70dedf723b\" returns successfully" Nov 1 00:36:10.290549 containerd[1468]: time="2025-11-01T00:36:10.290510565Z" level=info msg="StopPodSandbox for \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\"" Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.324 [WARNING][5393] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02d687a0-8306-485c-897b-e3fc603e4632", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9", Pod:"coredns-668d6bf9bc-ct4jw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3cd08bb14c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.325 [INFO][5393] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.325 [INFO][5393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" iface="eth0" netns="" Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.325 [INFO][5393] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.325 [INFO][5393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.348 [INFO][5401] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" HandleID="k8s-pod-network.5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.348 [INFO][5401] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.348 [INFO][5401] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.354 [WARNING][5401] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" HandleID="k8s-pod-network.5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.354 [INFO][5401] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" HandleID="k8s-pod-network.5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.355 [INFO][5401] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.360073 containerd[1468]: 2025-11-01 00:36:10.357 [INFO][5393] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:36:10.360482 containerd[1468]: time="2025-11-01T00:36:10.360130849Z" level=info msg="TearDown network for sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\" successfully" Nov 1 00:36:10.360482 containerd[1468]: time="2025-11-01T00:36:10.360160716Z" level=info msg="StopPodSandbox for \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\" returns successfully" Nov 1 00:36:10.360825 containerd[1468]: time="2025-11-01T00:36:10.360790608Z" level=info msg="RemovePodSandbox for \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\"" Nov 1 00:36:10.360858 containerd[1468]: time="2025-11-01T00:36:10.360836585Z" level=info msg="Forcibly stopping sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\"" Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.391 [WARNING][5419] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02d687a0-8306-485c-897b-e3fc603e4632", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0e6077a68cf4f4b38443467900d4abd91d35a2e71c052c5e9fc3cc1fb6b45d9", Pod:"coredns-668d6bf9bc-ct4jw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3cd08bb14c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.392 [INFO][5419] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.392 [INFO][5419] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" iface="eth0" netns="" Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.392 [INFO][5419] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.392 [INFO][5419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.412 [INFO][5427] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" HandleID="k8s-pod-network.5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.412 [INFO][5427] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.412 [INFO][5427] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.417 [WARNING][5427] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" HandleID="k8s-pod-network.5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.417 [INFO][5427] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" HandleID="k8s-pod-network.5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Workload="localhost-k8s-coredns--668d6bf9bc--ct4jw-eth0" Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.418 [INFO][5427] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.423502 containerd[1468]: 2025-11-01 00:36:10.421 [INFO][5419] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c" Nov 1 00:36:10.424012 containerd[1468]: time="2025-11-01T00:36:10.423537845Z" level=info msg="TearDown network for sandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\" successfully" Nov 1 00:36:10.427123 containerd[1468]: time="2025-11-01T00:36:10.427102387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:36:10.427169 containerd[1468]: time="2025-11-01T00:36:10.427138726Z" level=info msg="RemovePodSandbox \"5f506e15af8a37af35d2c998f0e1d54a4af2c491037b6767d4c407fbf16cdb5c\" returns successfully" Nov 1 00:36:10.427686 containerd[1468]: time="2025-11-01T00:36:10.427669419Z" level=info msg="StopPodSandbox for \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\"" Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.459 [WARNING][5445] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0", GenerateName:"calico-apiserver-68fc7bb9b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"42b9da1b-c5f5-468c-9b0b-bd955feccb34", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68fc7bb9b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5", Pod:"calico-apiserver-68fc7bb9b7-c7qgt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97ecd072bb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.459 [INFO][5445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.459 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" iface="eth0" netns="" Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.459 [INFO][5445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.459 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.477 [INFO][5453] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" HandleID="k8s-pod-network.50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.477 [INFO][5453] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.477 [INFO][5453] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.482 [WARNING][5453] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" HandleID="k8s-pod-network.50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.482 [INFO][5453] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" HandleID="k8s-pod-network.50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.483 [INFO][5453] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.488691 containerd[1468]: 2025-11-01 00:36:10.486 [INFO][5445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:36:10.489095 containerd[1468]: time="2025-11-01T00:36:10.488732963Z" level=info msg="TearDown network for sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\" successfully" Nov 1 00:36:10.489095 containerd[1468]: time="2025-11-01T00:36:10.488763011Z" level=info msg="StopPodSandbox for \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\" returns successfully" Nov 1 00:36:10.489283 containerd[1468]: time="2025-11-01T00:36:10.489256423Z" level=info msg="RemovePodSandbox for \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\"" Nov 1 00:36:10.489306 containerd[1468]: time="2025-11-01T00:36:10.489289496Z" level=info msg="Forcibly stopping sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\"" Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.523 [WARNING][5471] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0", GenerateName:"calico-apiserver-68fc7bb9b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"42b9da1b-c5f5-468c-9b0b-bd955feccb34", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68fc7bb9b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27c29aef5899575b828ed2189f79cefe1719bf0dee452250efad74afad6eebb5", Pod:"calico-apiserver-68fc7bb9b7-c7qgt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97ecd072bb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.523 [INFO][5471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.523 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" iface="eth0" netns="" Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.523 [INFO][5471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.523 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.541 [INFO][5480] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" HandleID="k8s-pod-network.50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.541 [INFO][5480] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.541 [INFO][5480] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.546 [WARNING][5480] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" HandleID="k8s-pod-network.50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.546 [INFO][5480] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" HandleID="k8s-pod-network.50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--c7qgt-eth0" Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.547 [INFO][5480] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.552506 containerd[1468]: 2025-11-01 00:36:10.549 [INFO][5471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d" Nov 1 00:36:10.552506 containerd[1468]: time="2025-11-01T00:36:10.552453158Z" level=info msg="TearDown network for sandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\" successfully" Nov 1 00:36:10.556152 containerd[1468]: time="2025-11-01T00:36:10.556098224Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:36:10.556276 containerd[1468]: time="2025-11-01T00:36:10.556172906Z" level=info msg="RemovePodSandbox \"50355b20d54218d2344642bda3aa13a6a9bf22feb569c1554b361342ce4e135d\" returns successfully" Nov 1 00:36:10.562382 containerd[1468]: time="2025-11-01T00:36:10.562347048Z" level=info msg="StopPodSandbox for \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\"" Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.591 [WARNING][5498] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" WorkloadEndpoint="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.592 [INFO][5498] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.592 [INFO][5498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" iface="eth0" netns="" Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.592 [INFO][5498] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.592 [INFO][5498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.612 [INFO][5507] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" HandleID="k8s-pod-network.fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Workload="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.613 [INFO][5507] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.613 [INFO][5507] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.618 [WARNING][5507] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" HandleID="k8s-pod-network.fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Workload="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.618 [INFO][5507] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" HandleID="k8s-pod-network.fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Workload="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.619 [INFO][5507] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.625240 containerd[1468]: 2025-11-01 00:36:10.622 [INFO][5498] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:36:10.625616 containerd[1468]: time="2025-11-01T00:36:10.625285250Z" level=info msg="TearDown network for sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\" successfully" Nov 1 00:36:10.625616 containerd[1468]: time="2025-11-01T00:36:10.625312422Z" level=info msg="StopPodSandbox for \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\" returns successfully" Nov 1 00:36:10.625888 containerd[1468]: time="2025-11-01T00:36:10.625846311Z" level=info msg="RemovePodSandbox for \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\"" Nov 1 00:36:10.625937 containerd[1468]: time="2025-11-01T00:36:10.625894573Z" level=info msg="Forcibly stopping sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\"" Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.658 [WARNING][5525] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" WorkloadEndpoint="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.658 [INFO][5525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.658 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" iface="eth0" netns="" Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.658 [INFO][5525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.658 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.676 [INFO][5534] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" HandleID="k8s-pod-network.fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Workload="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.677 [INFO][5534] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.677 [INFO][5534] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.681 [WARNING][5534] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" HandleID="k8s-pod-network.fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Workload="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.681 [INFO][5534] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" HandleID="k8s-pod-network.fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Workload="localhost-k8s-whisker--59bc9c756c--z94hk-eth0" Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.683 [INFO][5534] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.688159 containerd[1468]: 2025-11-01 00:36:10.685 [INFO][5525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20" Nov 1 00:36:10.688488 containerd[1468]: time="2025-11-01T00:36:10.688212040Z" level=info msg="TearDown network for sandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\" successfully" Nov 1 00:36:10.692046 containerd[1468]: time="2025-11-01T00:36:10.692019105Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:36:10.692103 containerd[1468]: time="2025-11-01T00:36:10.692057278Z" level=info msg="RemovePodSandbox \"fc269b9a1021e60b3e25d467aa3ec26bd0b905dfdff8d5b5e2870f8171d04b20\" returns successfully" Nov 1 00:36:10.692689 containerd[1468]: time="2025-11-01T00:36:10.692655239Z" level=info msg="StopPodSandbox for \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\"" Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.724 [WARNING][5551] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jzfns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31c28b53-e76c-45d5-b66c-cb1d82d504b6", ResourceVersion:"1228", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59", Pod:"csi-node-driver-jzfns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali10bab976708", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.725 [INFO][5551] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.725 [INFO][5551] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" iface="eth0" netns="" Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.725 [INFO][5551] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.725 [INFO][5551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.744 [INFO][5560] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" HandleID="k8s-pod-network.ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.744 [INFO][5560] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.744 [INFO][5560] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.750 [WARNING][5560] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" HandleID="k8s-pod-network.ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.750 [INFO][5560] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" HandleID="k8s-pod-network.ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.751 [INFO][5560] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.756894 containerd[1468]: 2025-11-01 00:36:10.754 [INFO][5551] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:36:10.757279 containerd[1468]: time="2025-11-01T00:36:10.756940923Z" level=info msg="TearDown network for sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\" successfully" Nov 1 00:36:10.757279 containerd[1468]: time="2025-11-01T00:36:10.756969227Z" level=info msg="StopPodSandbox for \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\" returns successfully" Nov 1 00:36:10.757478 containerd[1468]: time="2025-11-01T00:36:10.757446067Z" level=info msg="RemovePodSandbox for \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\"" Nov 1 00:36:10.757520 containerd[1468]: time="2025-11-01T00:36:10.757479651Z" level=info msg="Forcibly stopping sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\"" Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.787 [WARNING][5577] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jzfns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31c28b53-e76c-45d5-b66c-cb1d82d504b6", ResourceVersion:"1228", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"156771fb760a9fbb207f828bde177eb19d9c50beb004ab016fdf022a66a5ed59", Pod:"csi-node-driver-jzfns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali10bab976708", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.788 [INFO][5577] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.788 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" iface="eth0" netns="" Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.788 [INFO][5577] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.788 [INFO][5577] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.805 [INFO][5586] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" HandleID="k8s-pod-network.ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.805 [INFO][5586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.806 [INFO][5586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.810 [WARNING][5586] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" HandleID="k8s-pod-network.ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.810 [INFO][5586] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" HandleID="k8s-pod-network.ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Workload="localhost-k8s-csi--node--driver--jzfns-eth0" Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.811 [INFO][5586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.816883 containerd[1468]: 2025-11-01 00:36:10.814 [INFO][5577] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db" Nov 1 00:36:10.816883 containerd[1468]: time="2025-11-01T00:36:10.816844895Z" level=info msg="TearDown network for sandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\" successfully" Nov 1 00:36:10.821001 containerd[1468]: time="2025-11-01T00:36:10.820972741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:36:10.821047 containerd[1468]: time="2025-11-01T00:36:10.821021945Z" level=info msg="RemovePodSandbox \"ed901c2abdcf29ec954f2f120f963106a615b6ad9f82d024e30307e10d74e7db\" returns successfully" Nov 1 00:36:10.821515 containerd[1468]: time="2025-11-01T00:36:10.821488907Z" level=info msg="StopPodSandbox for \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\"" Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.851 [WARNING][5605] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0", GenerateName:"calico-kube-controllers-64f94746cd-", Namespace:"calico-system", SelfLink:"", UID:"4ca70b04-3681-42b1-b3b8-746e67038cfe", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f94746cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0", Pod:"calico-kube-controllers-64f94746cd-5r8bx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23728ae47bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.851 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.851 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" iface="eth0" netns="" Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.851 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.851 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.869 [INFO][5613] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" HandleID="k8s-pod-network.e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.869 [INFO][5613] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.869 [INFO][5613] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.875 [WARNING][5613] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" HandleID="k8s-pod-network.e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.875 [INFO][5613] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" HandleID="k8s-pod-network.e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.876 [INFO][5613] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.880860 containerd[1468]: 2025-11-01 00:36:10.878 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:36:10.881255 containerd[1468]: time="2025-11-01T00:36:10.880904687Z" level=info msg="TearDown network for sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\" successfully" Nov 1 00:36:10.881255 containerd[1468]: time="2025-11-01T00:36:10.880930136Z" level=info msg="StopPodSandbox for \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\" returns successfully" Nov 1 00:36:10.881513 containerd[1468]: time="2025-11-01T00:36:10.881476147Z" level=info msg="RemovePodSandbox for \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\"" Nov 1 00:36:10.881569 containerd[1468]: time="2025-11-01T00:36:10.881520773Z" level=info msg="Forcibly stopping sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\"" Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.914 [WARNING][5630] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0", GenerateName:"calico-kube-controllers-64f94746cd-", Namespace:"calico-system", SelfLink:"", UID:"4ca70b04-3681-42b1-b3b8-746e67038cfe", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f94746cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86377e5195036bb6a0f22e054ecc25f4c7ac31a4ee9cc93add56c888f9e3f2d0", Pod:"calico-kube-controllers-64f94746cd-5r8bx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23728ae47bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.914 [INFO][5630] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.914 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" iface="eth0" netns="" Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.914 [INFO][5630] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.914 [INFO][5630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.936 [INFO][5639] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" HandleID="k8s-pod-network.e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.936 [INFO][5639] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.936 [INFO][5639] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.941 [WARNING][5639] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" HandleID="k8s-pod-network.e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.941 [INFO][5639] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" HandleID="k8s-pod-network.e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Workload="localhost-k8s-calico--kube--controllers--64f94746cd--5r8bx-eth0" Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.942 [INFO][5639] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:10.947484 containerd[1468]: 2025-11-01 00:36:10.944 [INFO][5630] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a" Nov 1 00:36:10.947927 containerd[1468]: time="2025-11-01T00:36:10.947498635Z" level=info msg="TearDown network for sandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\" successfully" Nov 1 00:36:10.951846 containerd[1468]: time="2025-11-01T00:36:10.951816475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:36:10.951911 containerd[1468]: time="2025-11-01T00:36:10.951865177Z" level=info msg="RemovePodSandbox \"e101f67d76e6a6c1a47dc764e78ee017adf15cda89b3b9f75a24e30d3768260a\" returns successfully" Nov 1 00:36:10.952345 containerd[1468]: time="2025-11-01T00:36:10.952320387Z" level=info msg="StopPodSandbox for \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\"" Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:10.984 [WARNING][5656] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--rlz6p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"331a1960-88ad-4608-9f70-708ee400d030", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a", Pod:"goldmane-666569f655-rlz6p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibb72cf79e7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:10.985 [INFO][5656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:10.985 [INFO][5656] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" iface="eth0" netns="" Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:10.985 [INFO][5656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:10.985 [INFO][5656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:11.005 [INFO][5664] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" HandleID="k8s-pod-network.2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:11.005 [INFO][5664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:11.005 [INFO][5664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:11.011 [WARNING][5664] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" HandleID="k8s-pod-network.2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:11.011 [INFO][5664] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" HandleID="k8s-pod-network.2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:11.012 [INFO][5664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:11.018450 containerd[1468]: 2025-11-01 00:36:11.015 [INFO][5656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:36:11.019248 containerd[1468]: time="2025-11-01T00:36:11.018494268Z" level=info msg="TearDown network for sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\" successfully" Nov 1 00:36:11.019248 containerd[1468]: time="2025-11-01T00:36:11.018521590Z" level=info msg="StopPodSandbox for \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\" returns successfully" Nov 1 00:36:11.019248 containerd[1468]: time="2025-11-01T00:36:11.019068283Z" level=info msg="RemovePodSandbox for \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\"" Nov 1 00:36:11.019248 containerd[1468]: time="2025-11-01T00:36:11.019111656Z" level=info msg="Forcibly stopping sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\"" Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.052 [WARNING][5682] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--rlz6p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"331a1960-88ad-4608-9f70-708ee400d030", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc9d4152c71e1f3573abc9705411741bf872567240380ca2bea49614a6af631a", Pod:"goldmane-666569f655-rlz6p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibb72cf79e7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.053 [INFO][5682] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.053 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" iface="eth0" netns="" Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.053 [INFO][5682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.053 [INFO][5682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.074 [INFO][5690] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" HandleID="k8s-pod-network.2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.074 [INFO][5690] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.074 [INFO][5690] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.079 [WARNING][5690] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" HandleID="k8s-pod-network.2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.079 [INFO][5690] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" HandleID="k8s-pod-network.2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Workload="localhost-k8s-goldmane--666569f655--rlz6p-eth0" Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.081 [INFO][5690] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:11.086035 containerd[1468]: 2025-11-01 00:36:11.083 [INFO][5682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588" Nov 1 00:36:11.086035 containerd[1468]: time="2025-11-01T00:36:11.085999099Z" level=info msg="TearDown network for sandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\" successfully" Nov 1 00:36:11.089888 containerd[1468]: time="2025-11-01T00:36:11.089847540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:36:11.089941 containerd[1468]: time="2025-11-01T00:36:11.089898638Z" level=info msg="RemovePodSandbox \"2e9fba4a9311695be3513a7ca7d72fbde4a0f9f9cf27e46e7afc23ad75daf588\" returns successfully" Nov 1 00:36:11.090353 containerd[1468]: time="2025-11-01T00:36:11.090326023Z" level=info msg="StopPodSandbox for \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\"" Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.125 [WARNING][5708] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0", GenerateName:"calico-apiserver-68fc7bb9b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d57a8509-e37c-4d69-93aa-35fdadef5de6", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68fc7bb9b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c", Pod:"calico-apiserver-68fc7bb9b7-tvhcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91bba4e6801", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.125 [INFO][5708] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.125 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" iface="eth0" netns="" Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.125 [INFO][5708] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.125 [INFO][5708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.148 [INFO][5716] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" HandleID="k8s-pod-network.8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.149 [INFO][5716] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.149 [INFO][5716] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.155 [WARNING][5716] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" HandleID="k8s-pod-network.8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.156 [INFO][5716] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" HandleID="k8s-pod-network.8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.158 [INFO][5716] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:11.163189 containerd[1468]: 2025-11-01 00:36:11.160 [INFO][5708] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:36:11.163576 containerd[1468]: time="2025-11-01T00:36:11.163242516Z" level=info msg="TearDown network for sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\" successfully" Nov 1 00:36:11.163576 containerd[1468]: time="2025-11-01T00:36:11.163273405Z" level=info msg="StopPodSandbox for \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\" returns successfully" Nov 1 00:36:11.163882 containerd[1468]: time="2025-11-01T00:36:11.163844575Z" level=info msg="RemovePodSandbox for \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\"" Nov 1 00:36:11.163915 containerd[1468]: time="2025-11-01T00:36:11.163891744Z" level=info msg="Forcibly stopping sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\"" Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.197 [WARNING][5733] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0", GenerateName:"calico-apiserver-68fc7bb9b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d57a8509-e37c-4d69-93aa-35fdadef5de6", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68fc7bb9b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1510abfe599cddda964c28855963716512df8aacb53804b390b7fd4bf510c3c", Pod:"calico-apiserver-68fc7bb9b7-tvhcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91bba4e6801", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.197 [INFO][5733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.197 [INFO][5733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" iface="eth0" netns="" Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.197 [INFO][5733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.197 [INFO][5733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.221 [INFO][5741] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" HandleID="k8s-pod-network.8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.221 [INFO][5741] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.221 [INFO][5741] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.227 [WARNING][5741] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" HandleID="k8s-pod-network.8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.227 [INFO][5741] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" HandleID="k8s-pod-network.8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Workload="localhost-k8s-calico--apiserver--68fc7bb9b7--tvhcs-eth0" Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.228 [INFO][5741] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:36:11.233829 containerd[1468]: 2025-11-01 00:36:11.231 [INFO][5733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885" Nov 1 00:36:11.234224 containerd[1468]: time="2025-11-01T00:36:11.233884371Z" level=info msg="TearDown network for sandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\" successfully" Nov 1 00:36:11.237571 containerd[1468]: time="2025-11-01T00:36:11.237525686Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:36:11.237571 containerd[1468]: time="2025-11-01T00:36:11.237567175Z" level=info msg="RemovePodSandbox \"8d0e96dc25035166521c8231e32185cf595e9dbad06e3312ef2b449086a60885\" returns successfully" Nov 1 00:36:12.012842 kubelet[2499]: E1101 00:36:12.012650 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f6ff4bc47-cjjhn" podUID="d166a932-62b2-424c-af81-b672793d3ad2" Nov 1 00:36:14.031762 systemd[1]: Started sshd@14-10.0.0.5:22-10.0.0.1:57860.service - OpenSSH per-connection server daemon (10.0.0.1:57860). Nov 1 00:36:14.071366 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 57860 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:14.139897 sshd[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:14.143962 systemd-logind[1455]: New session 15 of user core. Nov 1 00:36:14.153719 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:36:14.271720 sshd[5750]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:14.276644 systemd[1]: sshd@14-10.0.0.5:22-10.0.0.1:57860.service: Deactivated successfully. 
Nov 1 00:36:14.279297 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:36:14.280022 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:36:14.281024 systemd-logind[1455]: Removed session 15. Nov 1 00:36:16.012236 kubelet[2499]: E1101 00:36:16.012179 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" podUID="4ca70b04-3681-42b1-b3b8-746e67038cfe" Nov 1 00:36:16.012808 kubelet[2499]: E1101 00:36:16.012521 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" podUID="42b9da1b-c5f5-468c-9b0b-bd955feccb34" Nov 1 00:36:17.013366 kubelet[2499]: E1101 00:36:17.013309 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6" Nov 1 00:36:18.626669 kubelet[2499]: E1101 00:36:18.626632 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:36:19.283298 systemd[1]: Started sshd@15-10.0.0.5:22-10.0.0.1:57866.service - OpenSSH per-connection server daemon (10.0.0.1:57866). Nov 1 00:36:19.319105 sshd[5798]: Accepted publickey for core from 10.0.0.1 port 57866 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:19.320591 sshd[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:19.324234 systemd-logind[1455]: New session 16 of user core. Nov 1 00:36:19.332714 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 1 00:36:19.443083 sshd[5798]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:19.453267 systemd[1]: sshd@15-10.0.0.5:22-10.0.0.1:57866.service: Deactivated successfully. Nov 1 00:36:19.454753 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:36:19.456101 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:36:19.462213 systemd[1]: Started sshd@16-10.0.0.5:22-10.0.0.1:57880.service - OpenSSH per-connection server daemon (10.0.0.1:57880). Nov 1 00:36:19.463098 systemd-logind[1455]: Removed session 16. Nov 1 00:36:19.492935 sshd[5812]: Accepted publickey for core from 10.0.0.1 port 57880 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:19.494362 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:19.498238 systemd-logind[1455]: New session 17 of user core. Nov 1 00:36:19.504718 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:36:19.686978 sshd[5812]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:19.703757 systemd[1]: sshd@16-10.0.0.5:22-10.0.0.1:57880.service: Deactivated successfully. Nov 1 00:36:19.705552 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:36:19.707014 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:36:19.708314 systemd[1]: Started sshd@17-10.0.0.5:22-10.0.0.1:57882.service - OpenSSH per-connection server daemon (10.0.0.1:57882). Nov 1 00:36:19.709573 systemd-logind[1455]: Removed session 17. Nov 1 00:36:19.746563 sshd[5824]: Accepted publickey for core from 10.0.0.1 port 57882 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:19.748009 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:19.752206 systemd-logind[1455]: New session 18 of user core. Nov 1 00:36:19.761722 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:36:20.012709 kubelet[2499]: E1101 00:36:20.012567 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rlz6p" podUID="331a1960-88ad-4608-9f70-708ee400d030" Nov 1 00:36:20.012709 kubelet[2499]: E1101 00:36:20.012567 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs" podUID="d57a8509-e37c-4d69-93aa-35fdadef5de6" Nov 1 00:36:20.315152 sshd[5824]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:20.329921 systemd[1]: sshd@17-10.0.0.5:22-10.0.0.1:57882.service: Deactivated successfully. Nov 1 00:36:20.332133 systemd[1]: session-18.scope: Deactivated successfully. 
Nov 1 00:36:20.336145 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:36:20.345896 systemd[1]: Started sshd@18-10.0.0.5:22-10.0.0.1:57888.service - OpenSSH per-connection server daemon (10.0.0.1:57888). Nov 1 00:36:20.347385 systemd-logind[1455]: Removed session 18. Nov 1 00:36:20.378365 sshd[5844]: Accepted publickey for core from 10.0.0.1 port 57888 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:20.379929 sshd[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:20.383653 systemd-logind[1455]: New session 19 of user core. Nov 1 00:36:20.393713 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:36:20.616340 sshd[5844]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:20.626950 systemd[1]: sshd@18-10.0.0.5:22-10.0.0.1:57888.service: Deactivated successfully. Nov 1 00:36:20.628674 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:36:20.630498 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:36:20.631918 systemd[1]: Started sshd@19-10.0.0.5:22-10.0.0.1:57892.service - OpenSSH per-connection server daemon (10.0.0.1:57892). Nov 1 00:36:20.633220 systemd-logind[1455]: Removed session 19. Nov 1 00:36:20.677163 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 57892 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:20.678887 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:20.682585 systemd-logind[1455]: New session 20 of user core. Nov 1 00:36:20.686719 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:36:20.857281 sshd[5857]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:20.861388 systemd[1]: sshd@19-10.0.0.5:22-10.0.0.1:57892.service: Deactivated successfully. Nov 1 00:36:20.863458 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:36:20.864214 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:36:20.865080 systemd-logind[1455]: Removed session 20. Nov 1 00:36:25.870330 systemd[1]: Started sshd@20-10.0.0.5:22-10.0.0.1:41444.service - OpenSSH per-connection server daemon (10.0.0.1:41444). Nov 1 00:36:25.909152 sshd[5874]: Accepted publickey for core from 10.0.0.1 port 41444 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:25.911470 sshd[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:25.917755 systemd-logind[1455]: New session 21 of user core. Nov 1 00:36:25.932858 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:36:26.014952 kubelet[2499]: E1101 00:36:26.014671 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:36:26.014952 kubelet[2499]: E1101 00:36:26.014872 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:36:26.052037 sshd[5874]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:26.055307 systemd[1]: sshd@20-10.0.0.5:22-10.0.0.1:41444.service: Deactivated successfully. Nov 1 00:36:26.057312 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:36:26.058947 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit. 
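The dns.go:153 "Nameserver limits exceeded" entries above fire because kubelet applies at most three nameservers from the host resolv.conf and drops the rest, which is why the applied line shows exactly 1.1.1.1, 1.0.0.1 and 8.8.8.8. The standalone sketch below reproduces that trimming under the assumption that the limit is three (kubelet's default); it is a simplified illustration, not kubelet's parser.

```go
// Simplified reproduction of kubelet's nameserver trimming; assumes the
// default per-resolv.conf limit of three nameservers.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // assumed kubelet default

// appliedNameservers keeps only the first three nameserver entries,
// as the "some nameservers have been omitted" warning describes.
func appliedNameservers(resolvConf string) []string {
	var ns []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		ns = ns[:maxNameservers]
	}
	return ns
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(appliedNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```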
Nov 1 00:36:26.060550 systemd-logind[1455]: Removed session 21. Nov 1 00:36:27.019424 containerd[1468]: time="2025-11-01T00:36:27.019381139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:36:27.351272 containerd[1468]: time="2025-11-01T00:36:27.351148456Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:27.352332 containerd[1468]: time="2025-11-01T00:36:27.352286495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:36:27.352466 containerd[1468]: time="2025-11-01T00:36:27.352391905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:36:27.352534 kubelet[2499]: E1101 00:36:27.352494 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:36:27.352873 kubelet[2499]: E1101 00:36:27.352544 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:36:27.352873 kubelet[2499]: E1101 00:36:27.352767 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cxdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64f94746cd-5r8bx_calico-system(4ca70b04-3681-42b1-b3b8-746e67038cfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:27.353241 containerd[1468]: time="2025-11-01T00:36:27.353211380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:36:27.354836 kubelet[2499]: E1101 00:36:27.354728 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" podUID="4ca70b04-3681-42b1-b3b8-746e67038cfe" Nov 1 00:36:27.679618 containerd[1468]: time="2025-11-01T00:36:27.679566778Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:27.680702 containerd[1468]: time="2025-11-01T00:36:27.680667748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:36:27.680825 containerd[1468]: time="2025-11-01T00:36:27.680746928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:36:27.680903 kubelet[2499]: E1101 00:36:27.680858 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:36:27.680948 kubelet[2499]: E1101 00:36:27.680906 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:36:27.681035 kubelet[2499]: E1101 00:36:27.680998 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:79277034d2d74e8eb716aae70187f367,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmc6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f6ff4bc47-cjjhn_calico-system(d166a932-62b2-424c-af81-b672793d3ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:27.684189 containerd[1468]: time="2025-11-01T00:36:27.683957869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:36:28.021007 containerd[1468]: time="2025-11-01T00:36:28.020881991Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:28.021978 containerd[1468]: time="2025-11-01T00:36:28.021948575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:36:28.022034 containerd[1468]: time="2025-11-01T00:36:28.022003779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:36:28.022150 kubelet[2499]: E1101 00:36:28.022113 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:36:28.022194 
kubelet[2499]: E1101 00:36:28.022154 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:36:28.022289 kubelet[2499]: E1101 00:36:28.022256 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmc6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f6ff4bc47-cjjhn_calico-system(d166a932-62b2-424c-af81-b672793d3ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:28.023481 kubelet[2499]: E1101 00:36:28.023431 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f6ff4bc47-cjjhn" 
podUID="d166a932-62b2-424c-af81-b672793d3ad2" Nov 1 00:36:30.015484 containerd[1468]: time="2025-11-01T00:36:30.014757953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:36:30.349270 containerd[1468]: time="2025-11-01T00:36:30.349125589Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:30.350273 containerd[1468]: time="2025-11-01T00:36:30.350229121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:36:30.350370 containerd[1468]: time="2025-11-01T00:36:30.350303242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:36:30.350488 kubelet[2499]: E1101 00:36:30.350434 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:36:30.350840 kubelet[2499]: E1101 00:36:30.350488 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:36:30.350840 kubelet[2499]: E1101 00:36:30.350637 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4vct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68fc7bb9b7-c7qgt_calico-apiserver(42b9da1b-c5f5-468c-9b0b-bd955feccb34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:30.352647 kubelet[2499]: E1101 00:36:30.352609 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" podUID="42b9da1b-c5f5-468c-9b0b-bd955feccb34" Nov 1 00:36:31.013054 containerd[1468]: time="2025-11-01T00:36:31.013011051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:36:31.064672 systemd[1]: Started sshd@21-10.0.0.5:22-10.0.0.1:41446.service - OpenSSH per-connection server daemon (10.0.0.1:41446). Nov 1 00:36:31.103861 sshd[5888]: Accepted publickey for core from 10.0.0.1 port 41446 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:31.105331 sshd[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:31.109352 systemd-logind[1455]: New session 22 of user core. Nov 1 00:36:31.114725 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:36:31.226663 sshd[5888]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:31.230840 systemd[1]: sshd@21-10.0.0.5:22-10.0.0.1:41446.service: Deactivated successfully. Nov 1 00:36:31.232847 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:36:31.233440 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:36:31.234294 systemd-logind[1455]: Removed session 22. 
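The PullImage/"trying next host" sequences above all end the same way: the ghcr.io/flatcar/calico/*:v3.30.4 tags cannot be resolved, so every pull fails with NotFound before any bytes transfer. A minimal sketch of reproducing one of these failures outside kubelet, using the stock containerd Go client against the node's socket; it assumes github.com/containerd/containerd is available as a module dependency, root access to /run/containerd/containerd.sock, and that kubelet-managed images live in the "k8s.io" namespace.

```go
// Reproduce the image-resolution failure directly against containerd.
// Assumptions: containerd Go client module present, root access to the
// socket, and the "k8s.io" namespace used by kubelet.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
	if _, err := client.Pull(ctx, ref); err != nil {
		// Expected here, matching the containerd entries above:
		// failed to resolve reference "...": not found
		log.Fatalf("pull %s: %v", ref, err)
	}
	log.Printf("pulled %s", ref)
}
```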
Nov 1 00:36:31.320986 containerd[1468]: time="2025-11-01T00:36:31.320896525Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:31.322302 containerd[1468]: time="2025-11-01T00:36:31.322243188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:36:31.322350 containerd[1468]: time="2025-11-01T00:36:31.322312099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:36:31.322478 kubelet[2499]: E1101 00:36:31.322433 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:36:31.322574 kubelet[2499]: E1101 00:36:31.322483 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:36:31.322669 kubelet[2499]: E1101 00:36:31.322623 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djwph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rlz6p_calico-system(331a1960-88ad-4608-9f70-708ee400d030): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:31.324469 kubelet[2499]: E1101 00:36:31.324428 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rlz6p" podUID="331a1960-88ad-4608-9f70-708ee400d030" Nov 1 00:36:32.014355 containerd[1468]: time="2025-11-01T00:36:32.012753770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:36:32.333994 containerd[1468]: time="2025-11-01T00:36:32.333727377Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:32.336723 containerd[1468]: time="2025-11-01T00:36:32.334781596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:36:32.336723 containerd[1468]: time="2025-11-01T00:36:32.334847701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:36:32.337795 kubelet[2499]: E1101 00:36:32.337749 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:36:32.338052 kubelet[2499]: E1101 00:36:32.337803 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:36:32.338052 kubelet[2499]: E1101 00:36:32.337907 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-278m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jzfns_calico-system(31c28b53-e76c-45d5-b66c-cb1d82d504b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:32.340087 containerd[1468]: time="2025-11-01T00:36:32.340052965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:36:32.640657 containerd[1468]: time="2025-11-01T00:36:32.640510443Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:32.641629 containerd[1468]: time="2025-11-01T00:36:32.641577225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:36:32.641724 containerd[1468]: time="2025-11-01T00:36:32.641646055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:36:32.641847 kubelet[2499]: E1101 00:36:32.641789 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:36:32.641899 kubelet[2499]: E1101 00:36:32.641846 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:36:32.641998 kubelet[2499]: E1101 00:36:32.641959 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-278m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jzfns_calico-system(31c28b53-e76c-45d5-b66c-cb1d82d504b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:32.643461 kubelet[2499]: E1101 00:36:32.643415 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jzfns" podUID="31c28b53-e76c-45d5-b66c-cb1d82d504b6" Nov 1 00:36:35.012736 containerd[1468]: time="2025-11-01T00:36:35.012686549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:36:35.333151 containerd[1468]: time="2025-11-01T00:36:35.333008983Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:36:35.334293 containerd[1468]: time="2025-11-01T00:36:35.334256295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:36:35.334403 containerd[1468]: time="2025-11-01T00:36:35.334293426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:36:35.334531 kubelet[2499]: E1101 00:36:35.334477 2499 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:36:35.335039 kubelet[2499]: E1101 00:36:35.334529 2499 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:36:35.335039 kubelet[2499]: E1101 00:36:35.334693 2499 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jffx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68fc7bb9b7-tvhcs_calico-apiserver(d57a8509-e37c-4d69-93aa-35fdadef5de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:36:35.336366 kubelet[2499]: E1101 00:36:35.336039 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-tvhcs" podUID="d57a8509-e37c-4d69-93aa-35fdadef5de6" Nov 1 00:36:36.238759 systemd[1]: Started sshd@22-10.0.0.5:22-10.0.0.1:54746.service - OpenSSH per-connection server daemon (10.0.0.1:54746). Nov 1 00:36:36.280244 sshd[5902]: Accepted publickey for core from 10.0.0.1 port 54746 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:36.281983 sshd[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:36.285868 systemd-logind[1455]: New session 23 of user core. Nov 1 00:36:36.298740 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 00:36:36.419076 sshd[5902]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:36.425568 systemd[1]: sshd@22-10.0.0.5:22-10.0.0.1:54746.service: Deactivated successfully. Nov 1 00:36:36.427853 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:36:36.428503 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:36:36.429515 systemd-logind[1455]: Removed session 23. 
Nov 1 00:36:41.012220 kubelet[2499]: E1101 00:36:41.012171 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:36:41.013011 kubelet[2499]: E1101 00:36:41.012952 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f94746cd-5r8bx" podUID="4ca70b04-3681-42b1-b3b8-746e67038cfe" Nov 1 00:36:41.014950 kubelet[2499]: E1101 00:36:41.014911 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f6ff4bc47-cjjhn" podUID="d166a932-62b2-424c-af81-b672793d3ad2" Nov 1 00:36:41.434314 systemd[1]: Started sshd@23-10.0.0.5:22-10.0.0.1:54748.service - OpenSSH per-connection server daemon (10.0.0.1:54748). Nov 1 00:36:41.496064 sshd[5926]: Accepted publickey for core from 10.0.0.1 port 54748 ssh2: RSA SHA256:PQwvVl4RxbpCWc+PbXgcFgibqa0JVuB6h11LHT1RbI8 Nov 1 00:36:41.497819 sshd[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:36:41.502150 systemd-logind[1455]: New session 24 of user core. Nov 1 00:36:41.506784 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 00:36:41.620174 sshd[5926]: pam_unix(sshd:session): session closed for user core Nov 1 00:36:41.624204 systemd[1]: sshd@23-10.0.0.5:22-10.0.0.1:54748.service: Deactivated successfully. Nov 1 00:36:41.626192 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:36:41.626906 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:36:41.627752 systemd-logind[1455]: Removed session 24. 
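The alternation above between ErrImagePull (a fresh pull attempt that fails) and ImagePullBackOff (the wait between attempts) is kubelet's pull backoff at work; by default it doubles from roughly 10s up to a 5m cap. A standalone sketch of that cadence, assuming those default values; this is an illustration of the schedule, not kubelet source.

```go
// Illustrative retry schedule behind the ErrImagePull/ImagePullBackOff
// alternation; assumes kubelet's default 10s initial backoff, doubling
// to a 5-minute cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("attempt %d failed: next retry in %s\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff // holds at the cap thereafter
		}
	}
}
```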
Nov 1 00:36:42.012800 kubelet[2499]: E1101 00:36:42.012747 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68fc7bb9b7-c7qgt" podUID="42b9da1b-c5f5-468c-9b0b-bd955feccb34"