Nov 8 00:20:21.956815 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:20:21.956839 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:20:21.956850 kernel: BIOS-provided physical RAM map:
Nov 8 00:20:21.956857 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 8 00:20:21.956863 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 8 00:20:21.956869 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 8 00:20:21.956876 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 8 00:20:21.956883 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 8 00:20:21.956889 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Nov 8 00:20:21.956895 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Nov 8 00:20:21.956904 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Nov 8 00:20:21.956910 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Nov 8 00:20:21.956916 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Nov 8 00:20:21.956923 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Nov 8 00:20:21.956930 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Nov 8 00:20:21.956937 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 8 00:20:21.956947 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Nov 8 00:20:21.956953 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Nov 8 00:20:21.956960 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 8 00:20:21.956967 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:20:21.956973 kernel: NX (Execute Disable) protection: active
Nov 8 00:20:21.956980 kernel: APIC: Static calls initialized
Nov 8 00:20:21.956986 kernel: efi: EFI v2.7 by EDK II
Nov 8 00:20:21.956993 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Nov 8 00:20:21.957000 kernel: SMBIOS 2.8 present.
Nov 8 00:20:21.957006 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Nov 8 00:20:21.957013 kernel: Hypervisor detected: KVM
Nov 8 00:20:21.957022 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:20:21.957029 kernel: kvm-clock: using sched offset of 5280452653 cycles
Nov 8 00:20:21.957036 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:20:21.957043 kernel: tsc: Detected 2794.750 MHz processor
Nov 8 00:20:21.957050 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:20:21.957057 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:20:21.957064 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Nov 8 00:20:21.957070 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 8 00:20:21.957077 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:20:21.957087 kernel: Using GB pages for direct mapping
Nov 8 00:20:21.957094 kernel: Secure boot disabled
Nov 8 00:20:21.957101 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:20:21.957108 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 8 00:20:21.957119 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 8 00:20:21.957126 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:21.957133 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:21.957143 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 8 00:20:21.957150 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:21.957157 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:21.957164 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:21.957171 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:20:21.957178 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 8 00:20:21.957185 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 8 00:20:21.957195 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 8 00:20:21.957202 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 8 00:20:21.957209 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 8 00:20:21.957217 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 8 00:20:21.957224 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 8 00:20:21.957231 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 8 00:20:21.957238 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 8 00:20:21.957245 kernel: No NUMA configuration found
Nov 8 00:20:21.957252 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Nov 8 00:20:21.957262 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Nov 8 00:20:21.957269 kernel: Zone ranges:
Nov 8 00:20:21.957276 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:20:21.957283 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Nov 8 00:20:21.957290 kernel: Normal empty
Nov 8 00:20:21.957297 kernel: Movable zone start for each node
Nov 8 00:20:21.957304 kernel: Early memory node ranges
Nov 8 00:20:21.957311 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 8 00:20:21.957318 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 8 00:20:21.957326 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 8 00:20:21.957335 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Nov 8 00:20:21.957342 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Nov 8 00:20:21.957349 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Nov 8 00:20:21.957357 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Nov 8 00:20:21.957364 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:20:21.957371 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 8 00:20:21.957390 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 8 00:20:21.957397 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:20:21.957404 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Nov 8 00:20:21.957416 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 8 00:20:21.957423 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Nov 8 00:20:21.957431 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:20:21.957438 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:20:21.957445 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:20:21.957453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:20:21.957460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:20:21.957467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:20:21.957475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:20:21.957485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:20:21.957492 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:20:21.957500 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:20:21.957507 kernel: TSC deadline timer available
Nov 8 00:20:21.957514 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 8 00:20:21.957522 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:20:21.957529 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 8 00:20:21.957536 kernel: kvm-guest: setup PV sched yield
Nov 8 00:20:21.957544 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 8 00:20:21.957554 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:20:21.957562 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:20:21.957569 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 8 00:20:21.957590 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Nov 8 00:20:21.957601 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Nov 8 00:20:21.957610 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 8 00:20:21.957618 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:20:21.957625 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:20:21.957633 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:20:21.957644 kernel: random: crng init done
Nov 8 00:20:21.957652 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:20:21.957659 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:20:21.957666 kernel: Fallback order for Node 0: 0
Nov 8 00:20:21.957673 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Nov 8 00:20:21.957680 kernel: Policy zone: DMA32
Nov 8 00:20:21.957687 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:20:21.957695 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 166140K reserved, 0K cma-reserved)
Nov 8 00:20:21.957702 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 8 00:20:21.957713 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:20:21.957720 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:20:21.957727 kernel: Dynamic Preempt: voluntary
Nov 8 00:20:21.957734 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:20:21.957756 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:20:21.957766 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 8 00:20:21.957774 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:20:21.957782 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:20:21.957789 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:20:21.957796 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:20:21.957804 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 8 00:20:21.957811 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 8 00:20:21.957821 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:20:21.957829 kernel: Console: colour dummy device 80x25
Nov 8 00:20:21.957836 kernel: printk: console [ttyS0] enabled
Nov 8 00:20:21.957844 kernel: ACPI: Core revision 20230628
Nov 8 00:20:21.957851 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:20:21.957861 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:20:21.957869 kernel: x2apic enabled
Nov 8 00:20:21.957876 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:20:21.957884 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 8 00:20:21.957891 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 8 00:20:21.957899 kernel: kvm-guest: setup PV IPIs
Nov 8 00:20:21.957906 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:20:21.957913 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:20:21.957921 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 8 00:20:21.957931 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:20:21.957939 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:20:21.957946 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:20:21.957954 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:20:21.957961 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:20:21.957968 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:20:21.957976 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 8 00:20:21.957983 kernel: active return thunk: retbleed_return_thunk
Nov 8 00:20:21.957991 kernel: RETBleed: Mitigation: untrained return thunk
Nov 8 00:20:21.958001 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:20:21.958009 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:20:21.958016 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 8 00:20:21.958024 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 8 00:20:21.958032 kernel: active return thunk: srso_return_thunk
Nov 8 00:20:21.958039 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 8 00:20:21.958047 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:20:21.958054 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:20:21.958064 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:20:21.958072 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:20:21.958079 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 8 00:20:21.958087 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:20:21.958094 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:20:21.958102 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:20:21.958109 kernel: landlock: Up and running.
Nov 8 00:20:21.958116 kernel: SELinux: Initializing.
Nov 8 00:20:21.958124 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:20:21.958134 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:20:21.958142 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 8 00:20:21.958149 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:20:21.958157 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:20:21.958164 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:20:21.958172 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:20:21.958179 kernel: ... version: 0
Nov 8 00:20:21.958186 kernel: ... bit width: 48
Nov 8 00:20:21.958194 kernel: ... generic registers: 6
Nov 8 00:20:21.958204 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:20:21.958211 kernel: ... max period: 00007fffffffffff
Nov 8 00:20:21.958219 kernel: ... fixed-purpose events: 0
Nov 8 00:20:21.958226 kernel: ... event mask: 000000000000003f
Nov 8 00:20:21.958233 kernel: signal: max sigframe size: 1776
Nov 8 00:20:21.958241 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:20:21.958248 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:20:21.958256 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:20:21.958263 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:20:21.958273 kernel: .... node #0, CPUs: #1 #2 #3
Nov 8 00:20:21.958281 kernel: smp: Brought up 1 node, 4 CPUs
Nov 8 00:20:21.958288 kernel: smpboot: Max logical packages: 1
Nov 8 00:20:21.958296 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 8 00:20:21.958303 kernel: devtmpfs: initialized
Nov 8 00:20:21.958310 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:20:21.958318 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 8 00:20:21.958326 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 8 00:20:21.958333 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Nov 8 00:20:21.958344 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 8 00:20:21.958351 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 8 00:20:21.958359 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:20:21.958366 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 8 00:20:21.958374 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:20:21.958391 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:20:21.958399 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:20:21.958407 kernel: audit: type=2000 audit(1762561219.928:1): state=initialized audit_enabled=0 res=1
Nov 8 00:20:21.958414 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:20:21.958425 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:20:21.958432 kernel: cpuidle: using governor menu
Nov 8 00:20:21.958439 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:20:21.958447 kernel: dca service started, version 1.12.1
Nov 8 00:20:21.958454 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:20:21.958462 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 00:20:21.958470 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:20:21.958477 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:20:21.958485 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:20:21.958495 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:20:21.958503 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:20:21.958510 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:20:21.958518 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:20:21.958525 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:20:21.958532 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:20:21.958540 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:20:21.958547 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:20:21.958555 kernel: ACPI: Interpreter enabled
Nov 8 00:20:21.958565 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 8 00:20:21.958572 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:20:21.958594 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:20:21.958605 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:20:21.958615 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:20:21.958624 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:20:21.958857 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:20:21.958988 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:20:21.959122 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:20:21.959132 kernel: PCI host bridge to bus 0000:00
Nov 8 00:20:21.959262 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:20:21.959372 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:20:21.959493 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:20:21.959660 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 8 00:20:21.959815 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:20:21.959961 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Nov 8 00:20:21.960107 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:20:21.960281 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:20:21.960457 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 8 00:20:21.960629 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Nov 8 00:20:21.960787 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Nov 8 00:20:21.960952 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 8 00:20:21.961111 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Nov 8 00:20:21.961298 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:20:21.961520 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 8 00:20:21.961720 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Nov 8 00:20:21.961886 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Nov 8 00:20:21.962048 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Nov 8 00:20:21.962240 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:20:21.962414 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Nov 8 00:20:21.962572 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Nov 8 00:20:21.962750 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Nov 8 00:20:21.962950 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:20:21.963118 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Nov 8 00:20:21.963285 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Nov 8 00:20:21.963454 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Nov 8 00:20:21.963628 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Nov 8 00:20:21.963820 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:20:21.963978 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:20:21.964159 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:20:21.964317 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Nov 8 00:20:21.964490 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Nov 8 00:20:21.964699 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:20:21.964863 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Nov 8 00:20:21.964880 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:20:21.964892 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:20:21.964903 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:20:21.964914 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:20:21.964925 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:20:21.964941 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:20:21.964952 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:20:21.964963 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:20:21.964973 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:20:21.964984 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:20:21.964995 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:20:21.965006 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:20:21.965016 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:20:21.965027 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:20:21.965042 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:20:21.965053 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:20:21.965064 kernel: iommu: Default domain type: Translated
Nov 8 00:20:21.965075 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:20:21.965086 kernel: efivars: Registered efivars operations
Nov 8 00:20:21.965099 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:20:21.965111 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:20:21.965122 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 8 00:20:21.965133 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Nov 8 00:20:21.965148 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Nov 8 00:20:21.965158 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Nov 8 00:20:21.965319 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:20:21.965488 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:20:21.965725 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:20:21.965741 kernel: vgaarb: loaded
Nov 8 00:20:21.965752 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:20:21.965762 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:20:21.965773 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:20:21.965788 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:20:21.965798 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:20:21.965808 kernel: pnp: PnP ACPI init
Nov 8 00:20:21.965981 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:20:21.965997 kernel: pnp: PnP ACPI: found 6 devices
Nov 8 00:20:21.966007 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:20:21.966018 kernel: NET: Registered PF_INET protocol family
Nov 8 00:20:21.966028 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:20:21.966043 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:20:21.966053 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:20:21.966064 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:20:21.966074 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:20:21.966084 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:20:21.966094 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:20:21.966105 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:20:21.966115 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:20:21.966126 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:20:21.966282 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Nov 8 00:20:21.966436 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Nov 8 00:20:21.966595 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:20:21.966742 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:20:21.966886 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:20:21.967032 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 8 00:20:21.967177 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:20:21.967328 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Nov 8 00:20:21.967344 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:20:21.967355 kernel: Initialise system trusted keyrings
Nov 8 00:20:21.967366 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:20:21.967376 kernel: Key type asymmetric registered
Nov 8 00:20:21.967397 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:20:21.967407 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:20:21.967417 kernel: io scheduler mq-deadline registered
Nov 8 00:20:21.967427 kernel: io scheduler kyber registered
Nov 8 00:20:21.967439 kernel: io scheduler bfq registered
Nov 8 00:20:21.967447 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:20:21.967455 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:20:21.967463 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:20:21.967470 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 8 00:20:21.967478 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:20:21.967486 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:20:21.967493 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:20:21.967501 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:20:21.967511 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:20:21.967519 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:20:21.967726 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 8 00:20:21.967843 kernel: rtc_cmos 00:04: registered as rtc0
Nov 8 00:20:21.967954 kernel: rtc_cmos 00:04: setting system clock to 2025-11-08T00:20:21 UTC (1762561221)
Nov 8 00:20:21.968066 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:20:21.968076 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:20:21.968084 kernel: efifb: probing for efifb
Nov 8 00:20:21.968097 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Nov 8 00:20:21.968106 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Nov 8 00:20:21.968114 kernel: efifb: scrolling: redraw
Nov 8 00:20:21.968124 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Nov 8 00:20:21.968131 kernel: Console: switching to colour frame buffer device 100x37
Nov 8 00:20:21.968139 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:20:21.968168 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:20:21.968179 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 8 00:20:21.968186 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:20:21.968197 kernel: Segment Routing with IPv6
Nov 8 00:20:21.968205 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:20:21.968212 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:20:21.968220 kernel: Key type dns_resolver registered
Nov 8 00:20:21.968228 kernel: IPI shorthand broadcast: enabled
Nov 8 00:20:21.968236 kernel: sched_clock: Marking stable (2115002188, 201177778)->(2367005855, -50825889)
Nov 8 00:20:21.968244 kernel: registered taskstats version 1
Nov 8 00:20:21.968251 kernel: Loading compiled-in X.509 certificates
Nov 8 00:20:21.968259 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:20:21.968270 kernel: Key type .fscrypt registered
Nov 8 00:20:21.968278 kernel: Key type fscrypt-provisioning registered
Nov 8 00:20:21.968285 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:20:21.968293 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:20:21.968301 kernel: ima: No architecture policies found
Nov 8 00:20:21.968308 kernel: clk: Disabling unused clocks
Nov 8 00:20:21.968316 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:20:21.968324 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:20:21.968334 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:20:21.968345 kernel: Run /init as init process
Nov 8 00:20:21.968352 kernel: with arguments:
Nov 8 00:20:21.968360 kernel: /init
Nov 8 00:20:21.968368 kernel: with environment:
Nov 8 00:20:21.968375 kernel: HOME=/
Nov 8 00:20:21.968391 kernel: TERM=linux
Nov 8 00:20:21.968402 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:20:21.968412 systemd[1]: Detected virtualization kvm.
Nov 8 00:20:21.968423 systemd[1]: Detected architecture x86-64.
Nov 8 00:20:21.968432 systemd[1]: Running in initrd.
Nov 8 00:20:21.968442 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:20:21.968451 systemd[1]: Hostname set to .
Nov 8 00:20:21.968459 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:20:21.968470 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:20:21.968479 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:20:21.968487 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:20:21.968496 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:20:21.968505 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:20:21.968513 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:20:21.968522 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:20:21.968535 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:20:21.968544 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:20:21.968552 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:20:21.968561 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:20:21.968569 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:20:21.968588 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:20:21.968597 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:20:21.968606 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:20:21.968617 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:20:21.968626 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:20:21.968634 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:20:21.968643 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:20:21.968651 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:20:21.968660 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:20:21.968668 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:20:21.968677 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:20:21.968685 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:20:21.968696 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:20:21.968704 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:20:21.968713 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:20:21.968721 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:20:21.968729 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:20:21.968738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:20:21.968746 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:20:21.968755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:20:21.968766 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:20:21.968775 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:20:21.968802 systemd-journald[192]: Collecting audit messages is disabled.
Nov 8 00:20:21.968825 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:20:21.968834 systemd-journald[192]: Journal started
Nov 8 00:20:21.968852 systemd-journald[192]: Runtime Journal (/run/log/journal/caa480b81e5248b8b18b22e75f5ef480) is 6.0M, max 48.3M, 42.2M free.
Nov 8 00:20:21.973332 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:20:21.948127 systemd-modules-load[193]: Inserted module 'overlay'
Nov 8 00:20:21.976134 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:20:21.978529 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:20:21.985613 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:20:21.986690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:20:21.990090 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:20:21.992833 kernel: Bridge firewalling registered
Nov 8 00:20:21.990916 systemd-modules-load[193]: Inserted module 'br_netfilter'
Nov 8 00:20:21.992973 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:20:22.000217 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:20:22.005727 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:20:22.007935 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:20:22.011143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:20:22.014839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:20:22.027905 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:20:22.029496 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:20:22.040040 dracut-cmdline[221]: dracut-dracut-053
Nov 8 00:20:22.043171 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:20:22.063837 systemd-resolved[230]: Positive Trust Anchors:
Nov 8 00:20:22.063853 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:20:22.063884 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:20:22.066420 systemd-resolved[230]: Defaulting to hostname 'linux'.
Nov 8 00:20:22.067541 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:20:22.069633 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:20:22.127633 kernel: SCSI subsystem initialized
Nov 8 00:20:22.136609 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:20:22.147608 kernel: iscsi: registered transport (tcp)
Nov 8 00:20:22.170181 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:20:22.170205 kernel: QLogic iSCSI HBA Driver
Nov 8 00:20:22.221089 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:20:22.234718 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:20:22.259714 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:20:22.259752 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:20:22.261289 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:20:22.302617 kernel: raid6: avx2x4 gen() 30439 MB/s
Nov 8 00:20:22.319612 kernel: raid6: avx2x2 gen() 30735 MB/s
Nov 8 00:20:22.337332 kernel: raid6: avx2x1 gen() 25948 MB/s
Nov 8 00:20:22.337363 kernel: raid6: using algorithm avx2x2 gen() 30735 MB/s
Nov 8 00:20:22.355377 kernel: raid6: .... xor() 19992 MB/s, rmw enabled
Nov 8 00:20:22.355405 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:20:22.376618 kernel: xor: automatically using best checksumming function avx
Nov 8 00:20:22.533622 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:20:22.549621 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:20:22.562870 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:20:22.578354 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Nov 8 00:20:22.582943 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:20:22.592761 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:20:22.606471 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Nov 8 00:20:22.639335 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:20:22.648770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:20:22.711313 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:20:22.716780 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:20:22.730829 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:20:22.735255 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:20:22.737761 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:20:22.744022 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:20:22.755718 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 8 00:20:22.756029 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:20:22.765718 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 8 00:20:22.765923 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:20:22.765704 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:20:22.773637 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:20:22.773653 kernel: GPT:9289727 != 19775487
Nov 8 00:20:22.773663 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:20:22.774488 kernel: GPT:9289727 != 19775487
Nov 8 00:20:22.775722 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:20:22.775744 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:20:22.780340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:20:22.780512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:20:22.788561 kernel: libata version 3.00 loaded.
Nov 8 00:20:22.786560 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:20:22.797437 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:20:22.797489 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:20:22.790751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:20:22.790940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:20:22.795660 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:20:22.806669 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:20:22.811345 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:20:22.809246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:20:22.823605 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:20:22.823788 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:20:22.825624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:20:22.831124 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Nov 8 00:20:22.835625 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (467)
Nov 8 00:20:22.839598 kernel: scsi host0: ahci
Nov 8 00:20:22.841395 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 8 00:20:22.849036 kernel: scsi host1: ahci
Nov 8 00:20:22.849206 kernel: scsi host2: ahci
Nov 8 00:20:22.849350 kernel: scsi host3: ahci
Nov 8 00:20:22.849507 kernel: scsi host4: ahci
Nov 8 00:20:22.849671 kernel: scsi host5: ahci
Nov 8 00:20:22.849842 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Nov 8 00:20:22.849853 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Nov 8 00:20:22.851607 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Nov 8 00:20:22.851622 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Nov 8 00:20:22.854296 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Nov 8 00:20:22.854309 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Nov 8 00:20:22.857089 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 8 00:20:22.863717 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:20:22.869462 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 8 00:20:22.871482 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 8 00:20:22.888709 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:20:22.891215 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:20:22.898375 disk-uuid[562]: Primary Header is updated.
Nov 8 00:20:22.898375 disk-uuid[562]: Secondary Entries is updated.
Nov 8 00:20:22.898375 disk-uuid[562]: Secondary Header is updated.
Nov 8 00:20:22.903523 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:20:22.903540 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:20:22.914047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:20:23.169600 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:20:23.169660 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:20:23.170621 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 8 00:20:23.171606 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:20:23.172597 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:20:23.173608 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 8 00:20:23.175734 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 8 00:20:23.175746 kernel: ata3.00: applying bridge limits
Nov 8 00:20:23.176616 kernel: ata3.00: configured for UDMA/100
Nov 8 00:20:23.179602 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 8 00:20:23.222170 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 8 00:20:23.222464 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:20:23.234609 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 8 00:20:23.960609 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:20:23.960667 disk-uuid[563]: The operation has completed successfully.
Nov 8 00:20:23.985515 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:20:23.985668 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:20:24.013735 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:20:24.017780 sh[589]: Success
Nov 8 00:20:24.029617 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:20:24.061364 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:20:24.085012 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:20:24.089212 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:20:24.106378 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:20:24.106418 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:20:24.106433 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:20:24.109588 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:20:24.109610 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:20:24.114672 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:20:24.115276 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:20:24.126694 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:20:24.128789 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:20:24.168906 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:20:24.168970 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:20:24.168981 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:20:24.191600 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:20:24.200411 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:20:24.203088 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:20:24.238709 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:20:24.260877 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:20:24.281489 systemd-networkd[767]: lo: Link UP
Nov 8 00:20:24.281500 systemd-networkd[767]: lo: Gained carrier
Nov 8 00:20:24.283026 systemd-networkd[767]: Enumeration completed
Nov 8 00:20:24.283126 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:20:24.283421 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:20:24.283425 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:20:24.284336 systemd-networkd[767]: eth0: Link UP
Nov 8 00:20:24.284340 systemd-networkd[767]: eth0: Gained carrier
Nov 8 00:20:24.284347 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:20:24.285963 systemd[1]: Reached target network.target - Network.
Nov 8 00:20:24.301626 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 8 00:20:24.443497 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:20:24.478772 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:20:24.527172 ignition[772]: Ignition 2.19.0
Nov 8 00:20:24.527185 ignition[772]: Stage: fetch-offline
Nov 8 00:20:24.527227 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:24.527237 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:24.527344 ignition[772]: parsed url from cmdline: ""
Nov 8 00:20:24.527348 ignition[772]: no config URL provided
Nov 8 00:20:24.527353 ignition[772]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:20:24.527363 ignition[772]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:20:24.527395 ignition[772]: op(1): [started] loading QEMU firmware config module
Nov 8 00:20:24.527401 ignition[772]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 8 00:20:24.538555 ignition[772]: op(1): [finished] loading QEMU firmware config module
Nov 8 00:20:24.584683 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.49
Nov 8 00:20:24.584700 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Nov 8 00:20:24.620603 ignition[772]: parsing config with SHA512: a9f7bed68ad349c233df38e25ae29ce4a642315c1f9865000f0eafdfccb00b0eb1a4578cb95e6b6821400cbfe2144855651723443c174ee4942384738f748772
Nov 8 00:20:24.624216 unknown[772]: fetched base config from "system"
Nov 8 00:20:24.624456 unknown[772]: fetched user config from "qemu"
Nov 8 00:20:24.626062 ignition[772]: fetch-offline: fetch-offline passed
Nov 8 00:20:24.626177 ignition[772]: Ignition finished successfully
Nov 8 00:20:24.629006 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:20:24.631766 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 8 00:20:24.637884 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:20:24.651017 ignition[783]: Ignition 2.19.0
Nov 8 00:20:24.651027 ignition[783]: Stage: kargs
Nov 8 00:20:24.651174 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:24.651186 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:24.655283 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:20:24.652024 ignition[783]: kargs: kargs passed
Nov 8 00:20:24.652061 ignition[783]: Ignition finished successfully
Nov 8 00:20:24.662705 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:20:24.675622 ignition[790]: Ignition 2.19.0
Nov 8 00:20:24.675633 ignition[790]: Stage: disks
Nov 8 00:20:24.675797 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:24.675808 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:24.676597 ignition[790]: disks: disks passed
Nov 8 00:20:24.676641 ignition[790]: Ignition finished successfully
Nov 8 00:20:24.684900 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:20:24.688151 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:20:24.688239 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:20:24.691619 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:20:24.692151 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:20:24.698195 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:20:24.714758 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:20:24.757398 systemd-fsck[800]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:20:25.023387 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:20:25.039685 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:20:25.169605 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:20:25.169761 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:20:25.171688 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:20:25.185656 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:20:25.187842 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:20:25.199454 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809)
Nov 8 00:20:25.199475 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:20:25.199492 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:20:25.199503 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:20:25.190656 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:20:25.205027 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:20:25.190694 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:20:25.190714 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:20:25.200526 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:20:25.206353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:20:25.211721 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:20:25.247766 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:20:25.252135 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:20:25.257909 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:20:25.261554 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:20:25.349336 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:20:25.368672 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:20:25.372604 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:20:25.378396 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:20:25.381655 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:20:25.394950 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:20:25.440349 ignition[928]: INFO : Ignition 2.19.0
Nov 8 00:20:25.440349 ignition[928]: INFO : Stage: mount
Nov 8 00:20:25.443319 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:25.443319 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:25.447614 ignition[928]: INFO : mount: mount passed
Nov 8 00:20:25.448977 ignition[928]: INFO : Ignition finished successfully
Nov 8 00:20:25.452568 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:20:25.469768 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:20:25.769769 systemd-networkd[767]: eth0: Gained IPv6LL
Nov 8 00:20:26.182814 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:20:26.195753 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Nov 8 00:20:26.195783 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:20:26.195794 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:20:26.198527 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:20:26.202611 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:20:26.203702 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:20:26.230464 ignition[954]: INFO : Ignition 2.19.0
Nov 8 00:20:26.230464 ignition[954]: INFO : Stage: files
Nov 8 00:20:26.233563 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:20:26.233563 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:20:26.233563 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:20:26.233563 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:20:26.233563 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:20:26.245513 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:20:26.245513 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:20:26.245513 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:20:26.245513 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:20:26.245513 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 8 00:20:26.235446 unknown[954]: wrote ssh authorized keys file for user: core
Nov 8 00:20:26.278945 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:20:26.344821 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:20:26.344821 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 8 00:20:26.350975 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 8 00:20:26.795272 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:20:27.177940 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 8 00:20:27.177940 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:20:27.184377 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:20:27.184377 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:20:27.184377 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:20:27.184377 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 8 00:20:27.184377 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:20:27.184377 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:20:27.184377 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 8 00:20:27.184377 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:20:27.211185 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:20:27.211185 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:20:27.211185 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:20:27.211185 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:20:27.211185 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:20:27.211185 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:20:27.211185 ignition[954]: INFO : files: createResultFile:
createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:20:27.211185 ignition[954]: INFO : files: files passed Nov 8 00:20:27.211185 ignition[954]: INFO : Ignition finished successfully Nov 8 00:20:27.206941 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:20:27.222785 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:20:27.227107 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:20:27.231723 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:20:27.250696 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Nov 8 00:20:27.231836 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:20:27.257485 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:20:27.257485 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:20:27.241842 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:20:27.265684 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:20:27.244887 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:20:27.253711 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:20:27.279076 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:20:27.279207 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:20:27.283051 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:20:27.286754 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:20:27.290098 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:20:27.305742 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:20:27.319094 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:20:27.325778 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:20:27.334435 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:20:27.336426 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:20:27.340049 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:20:27.343372 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:20:27.343490 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:20:27.347094 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:20:27.349945 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:20:27.353301 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:20:27.356642 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:20:27.359946 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:20:27.363544 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:20:27.367018 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
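The files stage above wrote the Helm tarball and manifests into /sysroot, created the sysext symlink, and flipped unit presets before the initrd teardown that follows. Ignition itself is a Go binary; the sketch below only mimics the three kinds of operations its log reports (createFiles, writing a link, setting presets), with the preset handling heavily simplified and all paths taken from the log:

```python
import os
import urllib.request

SYSROOT = "/sysroot"  # the initrd-mounted root the log writes through

def write_file_from_url(url: str, dest: str) -> None:
    # Mirrors the createFiles ops: GET the source, write the target.
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())

def write_sysext_link() -> None:
    # Mirrors op(9): /etc/extensions/kubernetes.raw -> versioned image.
    link = SYSROOT + "/etc/extensions/kubernetes.raw"
    os.makedirs(os.path.dirname(link), exist_ok=True)
    os.symlink("/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw", link)

def set_preset(unit: str, enabled: bool) -> None:
    # Crude stand-in for ops (f)/(11): presets become (re)moved symlinks
    # under multi-user.target.wants; real systemd-preset logic is richer.
    wants = f"{SYSROOT}/etc/systemd/system/multi-user.target.wants/{unit}"
    if enabled and not os.path.islink(wants):
        os.makedirs(os.path.dirname(wants), exist_ok=True)
        os.symlink(f"/etc/systemd/system/{unit}", wants)
    elif not enabled and os.path.islink(wants):
        os.unlink(wants)

write_file_from_url("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
                    SYSROOT + "/opt/helm-v3.17.3-linux-amd64.tar.gz")
write_sysext_link()
set_preset("prepare-helm.service", enabled=True)
set_preset("coreos-metadata.service", enabled=False)
```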
Nov 8 00:20:27.370790 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:20:27.374122 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:20:27.377738 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:20:27.380679 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:20:27.380813 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:20:27.384365 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:20:27.387041 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:20:27.390484 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:20:27.390607 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:20:27.394106 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:20:27.394219 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:20:27.397909 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:20:27.398018 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:20:27.401361 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:20:27.404241 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:20:27.407643 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:20:27.410191 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:20:27.413379 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:20:27.416589 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:20:27.416688 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:20:27.420008 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:20:27.420099 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:20:27.423030 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:20:27.423145 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:20:27.426740 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:20:27.426845 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:20:27.438734 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:20:27.454820 ignition[1008]: INFO : Ignition 2.19.0 Nov 8 00:20:27.454820 ignition[1008]: INFO : Stage: umount Nov 8 00:20:27.454820 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:20:27.454820 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:20:27.454820 ignition[1008]: INFO : umount: umount passed Nov 8 00:20:27.454820 ignition[1008]: INFO : Ignition finished successfully Nov 8 00:20:27.443733 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:20:27.446344 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:20:27.446472 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:20:27.449950 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:20:27.450062 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:20:27.460876 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:20:27.461000 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Nov 8 00:20:27.465344 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:20:27.465453 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:20:27.468924 systemd[1]: Stopped target network.target - Network. Nov 8 00:20:27.471775 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:20:27.471838 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:20:27.474711 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:20:27.474759 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:20:27.477843 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:20:27.477890 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:20:27.481046 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:20:27.481096 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:20:27.484926 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:20:27.488327 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:20:27.492695 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:20:27.499626 systemd-networkd[767]: eth0: DHCPv6 lease lost Nov 8 00:20:27.501813 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:20:27.501949 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:20:27.506274 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:20:27.506415 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:20:27.509272 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:20:27.509330 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:20:27.520945 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:20:27.523308 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:20:27.523364 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:20:27.527123 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:20:27.527172 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:20:27.530419 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:20:27.530467 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:20:27.532259 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:20:27.532308 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:20:27.535752 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:20:27.549076 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:20:27.549204 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:20:27.558134 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:20:27.558318 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:20:27.561758 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:20:27.561805 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:20:27.565228 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:20:27.565282 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 8 00:20:27.568708 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:20:27.568758 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:20:27.572870 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:20:27.572919 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:20:27.576696 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:20:27.576744 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:20:27.591718 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:20:27.593544 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:20:27.593613 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:20:27.597192 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:20:27.597241 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:20:27.601263 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:20:27.601313 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:20:27.603514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:20:27.603564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:27.607909 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:20:27.608012 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:20:27.688358 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:20:27.688495 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:20:27.691674 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:20:27.694517 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:20:27.694574 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:20:27.715790 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:20:27.726010 systemd[1]: Switching root. Nov 8 00:20:27.757510 systemd-journald[192]: Journal stopped Nov 8 00:20:29.003060 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Nov 8 00:20:29.003131 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:20:29.003145 kernel: SELinux: policy capability open_perms=1 Nov 8 00:20:29.003160 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:20:29.003171 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:20:29.003182 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:20:29.003193 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:20:29.003204 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:20:29.003223 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:20:29.003235 kernel: audit: type=1403 audit(1762561228.192:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:20:29.003247 systemd[1]: Successfully loaded SELinux policy in 41.447ms. Nov 8 00:20:29.003279 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.412ms. 
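After switch-root, the kernel prints one line per SELinux policy capability as the policy loads. On a running SELinux system those same flags are visible in selinuxfs; a sketch, assuming the usual /sys/fs/selinux/policy_capabilities layout (one 0/1 file per capability):

```python
from pathlib import Path

# selinuxfs publishes one 0/1 file per policy capability (assumed layout;
# requires a booted SELinux-enabled system). This reproduces the kernel's
# "SELinux: policy capability <name>=<value>" lines seen above.
for cap in sorted(Path("/sys/fs/selinux/policy_capabilities").iterdir()):
    print(f"policy capability {cap.name}={cap.read_text().strip()}")
```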
Nov 8 00:20:29.003295 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:20:29.003308 systemd[1]: Detected virtualization kvm. Nov 8 00:20:29.003321 systemd[1]: Detected architecture x86-64. Nov 8 00:20:29.003333 systemd[1]: Detected first boot. Nov 8 00:20:29.003349 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:20:29.003361 zram_generator::config[1052]: No configuration found. Nov 8 00:20:29.003374 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:20:29.003386 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:20:29.003401 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:20:29.003413 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:20:29.003425 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:20:29.003437 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:20:29.003450 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:20:29.003462 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:20:29.003522 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:20:29.003537 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:20:29.003553 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:20:29.003565 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:20:29.003589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:20:29.003602 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:20:29.003615 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:20:29.003628 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:20:29.003641 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:20:29.003653 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:20:29.003666 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:20:29.003681 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:20:29.003693 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:20:29.003710 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:20:29.003722 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:20:29.003734 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:20:29.003746 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:20:29.003758 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:20:29.003771 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:20:29.003785 systemd[1]: Reached target swap.target - Swaps. 
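The systemd banner encodes its compile-time options as +FEATURE/-FEATURE tokens: this build has SELinux and TPM2 in, AppArmor and ACL out. Splitting the string from the line above:

```python
# Feature string copied from the "systemd 255 running in system mode" line.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT")

enabled  = {t[1:] for t in FEATURES.split() if t.startswith("+")}
disabled = {t[1:] for t in FEATURES.split() if t.startswith("-")}
print(f"{len(enabled)} options compiled in, {len(disabled)} compiled out")
print("SELinux enabled:", "SELINUX" in enabled)
```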
Nov 8 00:20:29.003798 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:20:29.003810 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:20:29.003822 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:20:29.003834 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:20:29.003846 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:20:29.003858 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:20:29.003870 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:20:29.003882 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:20:29.003897 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:20:29.003909 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:29.003921 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:20:29.003933 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:20:29.003945 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:20:29.003958 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:20:29.003970 systemd[1]: Reached target machines.target - Containers. Nov 8 00:20:29.003982 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:20:29.003994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:29.004009 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:20:29.004021 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:20:29.004034 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:20:29.004046 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:20:29.004058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:20:29.004070 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:20:29.004082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:20:29.004094 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:20:29.004110 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:20:29.004122 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:20:29.004134 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:20:29.004146 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:20:29.004160 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:20:29.004172 kernel: loop: module loaded Nov 8 00:20:29.004184 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:20:29.004196 kernel: fuse: init (API version 7.39) Nov 8 00:20:29.004207 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 8 00:20:29.004231 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:20:29.004244 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:20:29.004256 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:20:29.004268 systemd[1]: Stopped verity-setup.service. Nov 8 00:20:29.004281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:29.004292 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:20:29.004305 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:20:29.004317 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:20:29.004331 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:20:29.004344 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:20:29.004356 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:20:29.004368 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:20:29.004380 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:20:29.004395 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:20:29.004407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:20:29.004419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:20:29.004433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:20:29.004446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:20:29.004475 systemd-journald[1115]: Collecting audit messages is disabled. Nov 8 00:20:29.004496 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:20:29.004508 kernel: ACPI: bus type drm_connector registered Nov 8 00:20:29.004523 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:20:29.004539 systemd-journald[1115]: Journal started Nov 8 00:20:29.004560 systemd-journald[1115]: Runtime Journal (/run/log/journal/caa480b81e5248b8b18b22e75f5ef480) is 6.0M, max 48.3M, 42.2M free. Nov 8 00:20:28.686012 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:20:28.703176 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 8 00:20:28.703631 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:20:29.007823 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:20:29.009197 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:20:29.009397 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:20:29.011360 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:20:29.011535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:20:29.013529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:20:29.015541 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:20:29.017980 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:20:29.032956 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:20:29.039661 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
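journald's startup line reports the runtime journal's current size, cap, and headroom in human-readable units; converting each field back to bytes is straightforward. A parse of that line:

```python
import re

# Size report copied from the systemd-journald startup line above.
LINE = ("Runtime Journal (/run/log/journal/caa480b81e5248b8b18b22e75f5ef480) "
        "is 6.0M, max 48.3M, 42.2M free.")

def to_bytes(s: str) -> int:
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    return int(float(s[:-1]) * units[s[-1]])

used, cap, free = re.search(r"is (\S+), max (\S+), (\S+) free", LINE).groups()
print(f"used {to_bytes(used)} of {to_bytes(cap)} bytes ({to_bytes(free)} free)")
```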
Nov 8 00:20:29.042522 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:20:29.044278 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:20:29.044308 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:20:29.046893 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:20:29.061772 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:20:29.064917 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:20:29.066621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:29.070981 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:20:29.078319 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:20:29.080547 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:20:29.082799 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:20:29.087431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:20:29.090936 systemd-journald[1115]: Time spent on flushing to /var/log/journal/caa480b81e5248b8b18b22e75f5ef480 is 17.841ms for 988 entries. Nov 8 00:20:29.090936 systemd-journald[1115]: System Journal (/var/log/journal/caa480b81e5248b8b18b22e75f5ef480) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:20:29.140324 systemd-journald[1115]: Received client request to flush runtime journal. Nov 8 00:20:29.140372 kernel: loop0: detected capacity change from 0 to 142488 Nov 8 00:20:29.090758 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:20:29.100098 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:20:29.104659 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:20:29.109412 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:20:29.113971 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:20:29.123762 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:20:29.128072 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:20:29.130854 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:20:29.133533 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:20:29.135802 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:20:29.144877 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:20:29.149627 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:20:29.153781 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:20:29.159893 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Nov 8 00:20:29.159913 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. 
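The flush report above gives enough to estimate per-entry cost when the runtime journal was persisted to /var: roughly 18 µs per entry. The arithmetic, for the record:

```python
# Figures from the systemd-journald "Time spent on flushing" line above.
ms_total, entries = 17.841, 988
print(f"{ms_total / entries * 1000:.1f} us per journal entry")  # ~18.1 us
```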
Nov 8 00:20:29.164031 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:20:29.167475 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:20:29.171544 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:20:29.179611 kernel: loop1: detected capacity change from 0 to 229808 Nov 8 00:20:29.182004 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:20:29.191353 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:20:29.227351 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:20:29.241178 kernel: loop2: detected capacity change from 0 to 140768 Nov 8 00:20:29.236798 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:20:29.261621 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Nov 8 00:20:29.262060 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Nov 8 00:20:29.267779 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:20:29.304603 kernel: loop3: detected capacity change from 0 to 142488 Nov 8 00:20:29.464612 kernel: loop4: detected capacity change from 0 to 229808 Nov 8 00:20:29.474613 kernel: loop5: detected capacity change from 0 to 140768 Nov 8 00:20:29.485214 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 8 00:20:29.486370 (sd-merge)[1193]: Merged extensions into '/usr'. Nov 8 00:20:29.505372 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:20:29.506156 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:20:29.511295 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:20:29.511375 systemd[1]: Reloading... Nov 8 00:20:29.557602 zram_generator::config[1217]: No configuration found. Nov 8 00:20:29.599670 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:20:29.682479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:29.730702 systemd[1]: Reloading finished in 218 ms. Nov 8 00:20:29.771223 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:20:29.783861 systemd[1]: Starting ensure-sysext.service... Nov 8 00:20:29.786435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:20:29.790843 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:20:29.790856 systemd[1]: Reloading... Nov 8 00:20:29.810025 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:20:29.810409 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:20:29.811423 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:20:29.811744 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. 
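The (sd-merge) lines show systemd-sysext stacking the containerd-flatcar, docker-flatcar, and kubernetes extension images over /usr, which it does with a read-only overlayfs mount where the leftmost lower layer wins on conflicts. A conceptual sketch with hypothetical staging paths; this is not systemd-sysext's actual code, which also validates each image's extension-release metadata before mounting:

```python
import subprocess

# Hypothetical staging paths -- the real extension hierarchies live under
# systemd's runtime directories after the images are attached and verified.
lowers = [
    "/run/extensions/kubernetes/usr",        # leftmost lowerdir wins
    "/run/extensions/docker-flatcar/usr",
    "/run/extensions/containerd-flatcar/usr",
    "/usr",                                  # base layer, lowest precedence
]

# Read-only overlay: lower layers only, no upperdir/workdir. Needs root.
subprocess.run(
    ["mount", "-t", "overlay", "overlay",
     "-o", "ro,lowerdir=" + ":".join(lowers), "/usr"],
    check=True,
)
```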
Nov 8 00:20:29.811947 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Nov 8 00:20:29.818861 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:20:29.818874 systemd-tmpfiles[1257]: Skipping /boot Nov 8 00:20:29.834039 zram_generator::config[1284]: No configuration found. Nov 8 00:20:29.833663 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:20:29.833670 systemd-tmpfiles[1257]: Skipping /boot Nov 8 00:20:29.943995 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:29.992502 systemd[1]: Reloading finished in 201 ms. Nov 8 00:20:30.013386 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:20:30.026053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:20:30.035410 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:20:30.038645 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:20:30.041739 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:20:30.045706 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:20:30.047528 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:20:30.051509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:30.052741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:30.058912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:20:30.063840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:20:30.067707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:20:30.069697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:30.069864 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:30.071010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:20:30.071235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:20:30.073990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:20:30.074171 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:20:30.077050 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:20:30.077231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:20:30.081559 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:20:30.088907 augenrules[1348]: No rules Nov 8 00:20:30.090727 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:20:30.093339 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:20:30.100258 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
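The duplicate-line warnings above mean two tmpfiles.d fragments both claim /root, /var/log/journal, and /var/lib/systemd; systemd-tmpfiles keeps the first line it sees and ignores the rest. A simplified scan for such collisions (real parsing also handles quoting, specifiers like %m, and the /etc over /run over /usr precedence):

```python
from collections import defaultdict
from pathlib import Path

# Map each configured path to the fragment(s) that declare it.
seen = defaultdict(list)
for frag in sorted(Path("/usr/lib/tmpfiles.d").glob("*.conf")):
    for line in frag.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) >= 2:          # fields: type, path, mode, uid, ...
            seen[fields[1]].append(frag.name)

for path, sources in seen.items():
    if len(sources) > 1:
        print(f"duplicate line for path {path!r}: {', '.join(sources)}")
```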
Nov 8 00:20:30.100514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:30.106827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:20:30.109652 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:20:30.112372 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:20:30.116803 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:20:30.118641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:30.120570 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:20:30.122570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:30.123927 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:20:30.126653 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:20:30.126816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:20:30.129159 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:20:30.129331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:20:30.131516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:20:30.131704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:20:30.134278 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:20:30.134439 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:20:30.137822 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:20:30.140200 systemd[1]: Finished ensure-sysext.service. Nov 8 00:20:30.148791 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:20:30.148898 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:20:30.154725 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:20:30.157811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:20:30.160728 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:20:30.162368 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:20:30.173325 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:20:30.176663 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:20:30.191692 systemd-udevd[1371]: Using default interface naming scheme 'v255'. Nov 8 00:20:30.208904 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:20:30.221818 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:20:30.252613 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:20:30.254756 systemd[1]: Reached target time-set.target - System Time Set. 
Nov 8 00:20:30.262258 systemd-resolved[1328]: Positive Trust Anchors: Nov 8 00:20:30.262285 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:20:30.262327 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:20:30.270100 systemd-resolved[1328]: Defaulting to hostname 'linux'. Nov 8 00:20:30.272158 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:20:30.274518 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:20:30.280943 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1384) Nov 8 00:20:30.281110 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:20:30.295169 systemd-networkd[1388]: lo: Link UP Nov 8 00:20:30.295514 systemd-networkd[1388]: lo: Gained carrier Nov 8 00:20:30.297834 systemd-networkd[1388]: Enumeration completed Nov 8 00:20:30.298236 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:20:30.298241 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:20:30.298659 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:20:30.302361 systemd-networkd[1388]: eth0: Link UP Nov 8 00:20:30.302618 systemd-networkd[1388]: eth0: Gained carrier Nov 8 00:20:30.302696 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:20:30.304110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:20:30.308284 systemd[1]: Reached target network.target - Network. Nov 8 00:20:30.317242 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:20:30.317367 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 8 00:20:30.318425 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Nov 8 00:20:30.945329 systemd-timesyncd[1370]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 8 00:20:30.945369 systemd-timesyncd[1370]: Initial clock synchronization to Sat 2025-11-08 00:20:30.945234 UTC. Nov 8 00:20:30.945839 systemd-resolved[1328]: Clock change detected. Flushing caches. Nov 8 00:20:30.948021 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:20:30.950593 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:20:30.960070 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
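The positive trust anchor systemd-resolved logs is the root zone's DNSSEC DS record; its fields unpack as key tag, algorithm, digest type, and digest. Pulling them apart:

```python
# Root-zone DS record copied from the systemd-resolved trust-anchor line.
DS = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _cls, _typ, key_tag, alg, digest_type, digest = DS.split()
assert (_cls, _typ) == ("IN", "DS")
print(f"zone={owner!r} key_tag={key_tag} algorithm={alg} (8 = RSA/SHA-256)")
print(f"digest_type={digest_type} (2 = SHA-256), digest={len(digest)//2} bytes")
```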
Nov 8 00:20:30.967896 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:20:30.973833 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:20:30.986394 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:20:30.993724 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 8 00:20:30.994098 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 8 00:20:30.994265 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 8 00:20:30.994451 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 8 00:20:31.039838 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:20:31.041934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:31.050110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:20:31.050313 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:31.056127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:31.108420 kernel: kvm_amd: TSC scaling supported Nov 8 00:20:31.108481 kernel: kvm_amd: Nested Virtualization enabled Nov 8 00:20:31.108512 kernel: kvm_amd: Nested Paging enabled Nov 8 00:20:31.109276 kernel: kvm_amd: LBR virtualization supported Nov 8 00:20:31.110208 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 8 00:20:31.111119 kernel: kvm_amd: Virtual GIF supported Nov 8 00:20:31.133842 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:20:31.135731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:31.156959 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:20:31.193979 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:20:31.204799 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:20:31.237399 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:20:31.240285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:20:31.242081 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:20:31.243940 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:20:31.246022 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:20:31.248192 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:20:31.250008 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:20:31.251977 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:20:31.253942 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:20:31.253975 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:20:31.255413 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:20:31.264370 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:20:31.268031 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:20:31.279337 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:20:31.282293 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Nov 8 00:20:31.284580 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:20:31.286353 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:20:31.287875 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:20:31.289348 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:20:31.289375 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:20:31.290344 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:20:31.292965 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:20:31.296908 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:20:31.300257 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:20:31.301909 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:20:31.304482 jq[1435]: false Nov 8 00:20:31.304898 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:20:31.306625 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:20:31.307944 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:20:31.314061 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:20:31.317587 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:20:31.325087 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:20:31.328330 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:20:31.328836 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:20:31.334500 dbus-daemon[1434]: [system] SELinux support is enabled Nov 8 00:20:31.334994 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:20:31.337952 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:20:31.340575 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:20:31.344971 extend-filesystems[1436]: Found loop3 Nov 8 00:20:31.344971 extend-filesystems[1436]: Found loop4 Nov 8 00:20:31.344971 extend-filesystems[1436]: Found loop5 Nov 8 00:20:31.344971 extend-filesystems[1436]: Found sr0 Nov 8 00:20:31.344971 extend-filesystems[1436]: Found vda Nov 8 00:20:31.344971 extend-filesystems[1436]: Found vda1 Nov 8 00:20:31.344971 extend-filesystems[1436]: Found vda2 Nov 8 00:20:31.344971 extend-filesystems[1436]: Found vda3 Nov 8 00:20:31.344971 extend-filesystems[1436]: Found usr Nov 8 00:20:31.344971 extend-filesystems[1436]: Found vda4 Nov 8 00:20:31.344971 extend-filesystems[1436]: Found vda6 Nov 8 00:20:31.344971 extend-filesystems[1436]: Found vda7 Nov 8 00:20:31.344971 extend-filesystems[1436]: Found vda9 Nov 8 00:20:31.344971 extend-filesystems[1436]: Checking size of /dev/vda9 Nov 8 00:20:31.352874 update_engine[1449]: I20251108 00:20:31.351634 1449 main.cc:92] Flatcar Update Engine starting Nov 8 00:20:31.353541 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Nov 8 00:20:31.354622 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:20:31.355426 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:20:31.356673 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:20:31.356969 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:20:31.361016 update_engine[1449]: I20251108 00:20:31.360965 1449 update_check_scheduler.cc:74] Next update check in 2m26s Nov 8 00:20:31.363704 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:20:31.364972 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:20:31.370053 jq[1452]: true Nov 8 00:20:31.379770 extend-filesystems[1436]: Resized partition /dev/vda9 Nov 8 00:20:31.383644 jq[1460]: true Nov 8 00:20:31.390411 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:20:31.392601 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1381) Nov 8 00:20:31.393033 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:20:31.395570 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:20:31.397990 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:20:31.398026 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:20:31.407703 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 8 00:20:31.407728 tar[1456]: linux-amd64/LICENSE Nov 8 00:20:31.407728 tar[1456]: linux-amd64/helm Nov 8 00:20:31.401894 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:20:31.401911 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:20:31.412331 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:20:31.440333 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:20:31.440568 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:20:31.441591 systemd-logind[1447]: New seat seat0. Nov 8 00:20:31.444104 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:20:31.452200 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 8 00:20:31.467010 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:20:31.478656 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:20:31.478656 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 8 00:20:31.478656 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 8 00:20:31.486507 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Nov 8 00:20:31.487951 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:20:31.482938 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:20:31.488390 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
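extend-filesystems grew the root ext4 filesystem online from 553472 to 1864699 blocks; at the 4 KiB block size in play here that is roughly 2.1 GiB to 7.1 GiB, i.e. the first-boot resize to fill the disk. The arithmetic:

```python
# Block counts from the EXT4 resize messages above; ext4 uses 4 KiB blocks here.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699
print(f"{old_blocks * BLOCK / 2**30:.2f} GiB -> "
      f"{new_blocks * BLOCK / 2**30:.2f} GiB")   # ~2.11 GiB -> ~7.11 GiB
```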
Nov 8 00:20:31.491611 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:20:31.495541 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:20:31.603205 containerd[1457]: time="2025-11-08T00:20:31.603103724Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:20:31.626225 containerd[1457]: time="2025-11-08T00:20:31.626129116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:31.627874 containerd[1457]: time="2025-11-08T00:20:31.627835065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:31.627874 containerd[1457]: time="2025-11-08T00:20:31.627862356Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:20:31.627923 containerd[1457]: time="2025-11-08T00:20:31.627876623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:20:31.628086 containerd[1457]: time="2025-11-08T00:20:31.628056250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:20:31.628086 containerd[1457]: time="2025-11-08T00:20:31.628077329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:31.628159 containerd[1457]: time="2025-11-08T00:20:31.628138344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:31.628159 containerd[1457]: time="2025-11-08T00:20:31.628154143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:31.628350 containerd[1457]: time="2025-11-08T00:20:31.628317419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:31.628350 containerd[1457]: time="2025-11-08T00:20:31.628336856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:31.628409 containerd[1457]: time="2025-11-08T00:20:31.628350131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:31.628409 containerd[1457]: time="2025-11-08T00:20:31.628360420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:31.628492 containerd[1457]: time="2025-11-08T00:20:31.628470186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:31.628725 containerd[1457]: time="2025-11-08T00:20:31.628694176Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:20:31.628859 containerd[1457]: time="2025-11-08T00:20:31.628836242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:31.628859 containerd[1457]: time="2025-11-08T00:20:31.628854707Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:20:31.628971 containerd[1457]: time="2025-11-08T00:20:31.628951448Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:20:31.629025 containerd[1457]: time="2025-11-08T00:20:31.629007483Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:20:31.634678 containerd[1457]: time="2025-11-08T00:20:31.634648190Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:20:31.634712 containerd[1457]: time="2025-11-08T00:20:31.634694707Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:20:31.634743 containerd[1457]: time="2025-11-08T00:20:31.634716528Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:20:31.634743 containerd[1457]: time="2025-11-08T00:20:31.634732347Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:20:31.634793 containerd[1457]: time="2025-11-08T00:20:31.634746394Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:20:31.634912 containerd[1457]: time="2025-11-08T00:20:31.634874975Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:20:31.635127 containerd[1457]: time="2025-11-08T00:20:31.635108212Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:20:31.635231 containerd[1457]: time="2025-11-08T00:20:31.635212949Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:20:31.635256 containerd[1457]: time="2025-11-08T00:20:31.635231052Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:20:31.635256 containerd[1457]: time="2025-11-08T00:20:31.635245459Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:20:31.635256 containerd[1457]: time="2025-11-08T00:20:31.635260558Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:20:31.635331 containerd[1457]: time="2025-11-08T00:20:31.635273873Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:20:31.635331 containerd[1457]: time="2025-11-08T00:20:31.635286136Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:20:31.635331 containerd[1457]: time="2025-11-08T00:20:31.635297778Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Nov 8 00:20:31.635382 containerd[1457]: time="2025-11-08T00:20:31.635337372Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:20:31.635382 containerd[1457]: time="2025-11-08T00:20:31.635353342Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:20:31.635382 containerd[1457]: time="2025-11-08T00:20:31.635366436Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:20:31.635382 containerd[1457]: time="2025-11-08T00:20:31.635377988Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:20:31.635466 containerd[1457]: time="2025-11-08T00:20:31.635407293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635466 containerd[1457]: time="2025-11-08T00:20:31.635420979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635466 containerd[1457]: time="2025-11-08T00:20:31.635434053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635466 containerd[1457]: time="2025-11-08T00:20:31.635445404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635466 containerd[1457]: time="2025-11-08T00:20:31.635457307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635560 containerd[1457]: time="2025-11-08T00:20:31.635469620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635560 containerd[1457]: time="2025-11-08T00:20:31.635482474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635560 containerd[1457]: time="2025-11-08T00:20:31.635495108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635560 containerd[1457]: time="2025-11-08T00:20:31.635507851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635560 containerd[1457]: time="2025-11-08T00:20:31.635531776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635560 containerd[1457]: time="2025-11-08T00:20:31.635543308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635560 containerd[1457]: time="2025-11-08T00:20:31.635555861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635681 containerd[1457]: time="2025-11-08T00:20:31.635568255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635681 containerd[1457]: time="2025-11-08T00:20:31.635583854Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:20:31.635681 containerd[1457]: time="2025-11-08T00:20:31.635610394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Nov 8 00:20:31.635681 containerd[1457]: time="2025-11-08T00:20:31.635622386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635681 containerd[1457]: time="2025-11-08T00:20:31.635633507Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:20:31.635681 containerd[1457]: time="2025-11-08T00:20:31.635676428Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:20:31.635791 containerd[1457]: time="2025-11-08T00:20:31.635692337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:20:31.635791 containerd[1457]: time="2025-11-08T00:20:31.635703418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:20:31.635791 containerd[1457]: time="2025-11-08T00:20:31.635715060Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:20:31.635791 containerd[1457]: time="2025-11-08T00:20:31.635724798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:20:31.635791 containerd[1457]: time="2025-11-08T00:20:31.635736400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:20:31.635791 containerd[1457]: time="2025-11-08T00:20:31.635745918Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:20:31.635791 containerd[1457]: time="2025-11-08T00:20:31.635756548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:20:31.636235 containerd[1457]: time="2025-11-08T00:20:31.636180964Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:20:31.636235 containerd[1457]: time="2025-11-08T00:20:31.636232781Z" level=info msg="Connect containerd service" Nov 8 00:20:31.636391 containerd[1457]: time="2025-11-08T00:20:31.636262987Z" level=info msg="using legacy CRI server" Nov 8 00:20:31.636391 containerd[1457]: time="2025-11-08T00:20:31.636270461Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:20:31.636391 containerd[1457]: time="2025-11-08T00:20:31.636361272Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:20:31.637123 containerd[1457]: time="2025-11-08T00:20:31.637086511Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:20:31.637233 
containerd[1457]: time="2025-11-08T00:20:31.637202669Z" level=info msg="Start subscribing containerd event" Nov 8 00:20:31.637258 containerd[1457]: time="2025-11-08T00:20:31.637251100Z" level=info msg="Start recovering state" Nov 8 00:20:31.639010 containerd[1457]: time="2025-11-08T00:20:31.638987726Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:20:31.639065 containerd[1457]: time="2025-11-08T00:20:31.639043781Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:20:31.639124 containerd[1457]: time="2025-11-08T00:20:31.639107040Z" level=info msg="Start event monitor" Nov 8 00:20:31.639150 containerd[1457]: time="2025-11-08T00:20:31.639127468Z" level=info msg="Start snapshots syncer" Nov 8 00:20:31.639150 containerd[1457]: time="2025-11-08T00:20:31.639136926Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:20:31.639150 containerd[1457]: time="2025-11-08T00:20:31.639144119Z" level=info msg="Start streaming server" Nov 8 00:20:31.639210 containerd[1457]: time="2025-11-08T00:20:31.639195115Z" level=info msg="containerd successfully booted in 0.037274s" Nov 8 00:20:31.641917 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:20:31.655244 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:20:31.680369 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:20:31.694034 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:20:31.702694 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:20:31.703010 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:20:31.706550 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:20:31.722860 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:20:31.737245 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:20:31.740713 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:20:31.743523 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:20:31.874455 tar[1456]: linux-amd64/README.md Nov 8 00:20:31.887426 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:20:32.924004 systemd-networkd[1388]: eth0: Gained IPv6LL Nov 8 00:20:32.927698 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:20:32.930912 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:20:32.944031 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 8 00:20:32.947683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:32.950858 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:20:32.972155 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:20:32.972425 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 8 00:20:32.975027 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:20:32.978129 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:20:33.764576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:33.767159 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:20:33.769675 systemd[1]: Startup finished in 2.254s (kernel) + 6.470s (initrd) + 4.991s (userspace) = 13.716s. 
Nov 8 00:20:33.769912 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:20:34.438238 kubelet[1546]: E1108 00:20:34.438174 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:20:34.442783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:20:34.443040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:20:34.443382 systemd[1]: kubelet.service: Consumed 1.352s CPU time. Nov 8 00:20:35.790332 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:20:35.791531 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:49614.service - OpenSSH per-connection server daemon (10.0.0.1:49614). Nov 8 00:20:35.833013 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 49614 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:20:35.835263 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:35.844961 systemd-logind[1447]: New session 1 of user core. Nov 8 00:20:35.846474 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:20:35.860099 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:20:35.872772 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:20:35.885042 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:20:35.888149 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:20:36.001535 systemd[1564]: Queued start job for default target default.target. Nov 8 00:20:36.019637 systemd[1564]: Created slice app.slice - User Application Slice. Nov 8 00:20:36.019670 systemd[1564]: Reached target paths.target - Paths. Nov 8 00:20:36.019685 systemd[1564]: Reached target timers.target - Timers. Nov 8 00:20:36.021475 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:20:36.033400 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:20:36.033577 systemd[1564]: Reached target sockets.target - Sockets. Nov 8 00:20:36.033605 systemd[1564]: Reached target basic.target - Basic System. Nov 8 00:20:36.033661 systemd[1564]: Reached target default.target - Main User Target. Nov 8 00:20:36.033700 systemd[1564]: Startup finished in 138ms. Nov 8 00:20:36.033944 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:20:36.035690 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:20:36.101949 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:49622.service - OpenSSH per-connection server daemon (10.0.0.1:49622). Nov 8 00:20:36.134211 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 49622 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:20:36.135776 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:36.139850 systemd-logind[1447]: New session 2 of user core. Nov 8 00:20:36.156941 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 8 00:20:36.210285 sshd[1575]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:36.217622 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:49622.service: Deactivated successfully. Nov 8 00:20:36.219419 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:20:36.221015 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:20:36.234048 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:49634.service - OpenSSH per-connection server daemon (10.0.0.1:49634). Nov 8 00:20:36.234992 systemd-logind[1447]: Removed session 2. Nov 8 00:20:36.261247 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 49634 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:20:36.262803 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:36.266890 systemd-logind[1447]: New session 3 of user core. Nov 8 00:20:36.276936 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:20:36.327537 sshd[1582]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:36.343658 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:49634.service: Deactivated successfully. Nov 8 00:20:36.345388 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:20:36.346988 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:20:36.357074 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:49648.service - OpenSSH per-connection server daemon (10.0.0.1:49648). Nov 8 00:20:36.358081 systemd-logind[1447]: Removed session 3. Nov 8 00:20:36.385742 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 49648 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:20:36.387367 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:36.392114 systemd-logind[1447]: New session 4 of user core. Nov 8 00:20:36.403952 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:20:36.460210 sshd[1589]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:36.480138 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:49648.service: Deactivated successfully. Nov 8 00:20:36.482181 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:20:36.484386 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:20:36.485777 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:49664.service - OpenSSH per-connection server daemon (10.0.0.1:49664). Nov 8 00:20:36.486790 systemd-logind[1447]: Removed session 4. Nov 8 00:20:36.517676 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 49664 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:20:36.519177 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:36.523393 systemd-logind[1447]: New session 5 of user core. Nov 8 00:20:36.532936 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:20:36.590646 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:20:36.591001 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:36.617872 sudo[1599]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:36.620256 sshd[1596]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:36.633927 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:49664.service: Deactivated successfully. Nov 8 00:20:36.635658 systemd[1]: session-5.scope: Deactivated successfully. 
Nov 8 00:20:36.637371 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:20:36.638739 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:49666.service - OpenSSH per-connection server daemon (10.0.0.1:49666). Nov 8 00:20:36.639735 systemd-logind[1447]: Removed session 5. Nov 8 00:20:36.672093 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 49666 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:20:36.673890 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:36.677784 systemd-logind[1447]: New session 6 of user core. Nov 8 00:20:36.687952 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:20:36.742496 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:20:36.742851 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:36.746598 sudo[1608]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:36.752873 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:20:36.753207 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:36.773047 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:20:36.775496 auditctl[1611]: No rules Nov 8 00:20:36.776791 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:20:36.777082 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:20:36.778932 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:20:36.814566 augenrules[1629]: No rules Nov 8 00:20:36.816601 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:20:36.818101 sudo[1607]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:36.820284 sshd[1604]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:36.834179 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:49666.service: Deactivated successfully. Nov 8 00:20:36.836465 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:20:36.838458 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:20:36.849066 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:49670.service - OpenSSH per-connection server daemon (10.0.0.1:49670). Nov 8 00:20:36.850104 systemd-logind[1447]: Removed session 6. Nov 8 00:20:36.877087 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 49670 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:20:36.878566 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:36.882716 systemd-logind[1447]: New session 7 of user core. Nov 8 00:20:36.897963 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:20:36.950974 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:20:36.951317 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:37.481020 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 8 00:20:37.481179 (dockerd)[1659]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:20:38.362268 dockerd[1659]: time="2025-11-08T00:20:38.362153851Z" level=info msg="Starting up" Nov 8 00:20:38.967304 dockerd[1659]: time="2025-11-08T00:20:38.967217094Z" level=info msg="Loading containers: start." Nov 8 00:20:39.190838 kernel: Initializing XFRM netlink socket Nov 8 00:20:39.273430 systemd-networkd[1388]: docker0: Link UP Nov 8 00:20:39.298209 dockerd[1659]: time="2025-11-08T00:20:39.298177215Z" level=info msg="Loading containers: done." Nov 8 00:20:39.320915 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2060141908-merged.mount: Deactivated successfully. Nov 8 00:20:39.321728 dockerd[1659]: time="2025-11-08T00:20:39.321639847Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:20:39.321846 dockerd[1659]: time="2025-11-08T00:20:39.321829022Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:20:39.321967 dockerd[1659]: time="2025-11-08T00:20:39.321953154Z" level=info msg="Daemon has completed initialization" Nov 8 00:20:39.360913 dockerd[1659]: time="2025-11-08T00:20:39.360826422Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:20:39.361102 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:20:40.310777 containerd[1457]: time="2025-11-08T00:20:40.310732335Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 8 00:20:40.832586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792952489.mount: Deactivated successfully. 
Nov 8 00:20:41.820412 containerd[1457]: time="2025-11-08T00:20:41.820342293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:41.821270 containerd[1457]: time="2025-11-08T00:20:41.821243283Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 8 00:20:41.822585 containerd[1457]: time="2025-11-08T00:20:41.822538050Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:41.932526 containerd[1457]: time="2025-11-08T00:20:41.932464022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:41.936301 containerd[1457]: time="2025-11-08T00:20:41.934310844Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.6232215s" Nov 8 00:20:41.936301 containerd[1457]: time="2025-11-08T00:20:41.934373291Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 8 00:20:41.936802 containerd[1457]: time="2025-11-08T00:20:41.936749276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 8 00:20:44.499634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:20:44.597914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:45.051653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:45.056669 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:20:45.310225 kubelet[1877]: E1108 00:20:45.310034 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:20:45.317029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:20:45.317225 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:20:45.464344 containerd[1457]: time="2025-11-08T00:20:45.464290535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:45.465089 containerd[1457]: time="2025-11-08T00:20:45.465038157Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 8 00:20:45.466284 containerd[1457]: time="2025-11-08T00:20:45.466227688Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:45.468953 containerd[1457]: time="2025-11-08T00:20:45.468922861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:45.470048 containerd[1457]: time="2025-11-08T00:20:45.470020339Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 3.533215679s" Nov 8 00:20:45.470095 containerd[1457]: time="2025-11-08T00:20:45.470055224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 8 00:20:45.470842 containerd[1457]: time="2025-11-08T00:20:45.470701586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 8 00:20:48.468159 containerd[1457]: time="2025-11-08T00:20:48.468110397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:48.468854 containerd[1457]: time="2025-11-08T00:20:48.468824406Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 8 00:20:48.469996 containerd[1457]: time="2025-11-08T00:20:48.469955256Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:48.472599 containerd[1457]: time="2025-11-08T00:20:48.472566131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:48.473510 containerd[1457]: time="2025-11-08T00:20:48.473474184Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 3.002745217s" Nov 8 00:20:48.473510 containerd[1457]: time="2025-11-08T00:20:48.473500954Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 8 00:20:48.474825 containerd[1457]: 
time="2025-11-08T00:20:48.474647574Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 8 00:20:50.597695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2036765887.mount: Deactivated successfully. Nov 8 00:20:51.900501 containerd[1457]: time="2025-11-08T00:20:51.900442809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:51.969075 containerd[1457]: time="2025-11-08T00:20:51.968976590Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 8 00:20:52.095144 containerd[1457]: time="2025-11-08T00:20:52.095079503Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:52.147305 containerd[1457]: time="2025-11-08T00:20:52.147270611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:52.147797 containerd[1457]: time="2025-11-08T00:20:52.147750711Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 3.67306164s" Nov 8 00:20:52.147867 containerd[1457]: time="2025-11-08T00:20:52.147798040Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 8 00:20:52.148353 containerd[1457]: time="2025-11-08T00:20:52.148331330Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 8 00:20:52.754922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3001023966.mount: Deactivated successfully. 
Nov 8 00:20:54.422510 containerd[1457]: time="2025-11-08T00:20:54.422428852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:54.464962 containerd[1457]: time="2025-11-08T00:20:54.464014395Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 8 00:20:54.468587 containerd[1457]: time="2025-11-08T00:20:54.468501589Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:54.476327 containerd[1457]: time="2025-11-08T00:20:54.476273391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:54.477936 containerd[1457]: time="2025-11-08T00:20:54.477836341Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.329475005s" Nov 8 00:20:54.477936 containerd[1457]: time="2025-11-08T00:20:54.477922293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 8 00:20:54.478677 containerd[1457]: time="2025-11-08T00:20:54.478584354Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:20:55.349999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:20:55.369418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:55.374994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812548798.mount: Deactivated successfully. 
Nov 8 00:20:55.398316 containerd[1457]: time="2025-11-08T00:20:55.398228336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:55.401158 containerd[1457]: time="2025-11-08T00:20:55.399552389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:20:55.405415 containerd[1457]: time="2025-11-08T00:20:55.404256178Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:55.411348 containerd[1457]: time="2025-11-08T00:20:55.411233431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:55.413543 containerd[1457]: time="2025-11-08T00:20:55.412215742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 933.586745ms" Nov 8 00:20:55.413543 containerd[1457]: time="2025-11-08T00:20:55.412265546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:20:55.413918 containerd[1457]: time="2025-11-08T00:20:55.413882428Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 8 00:20:55.718610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:55.724891 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:20:55.951566 kubelet[1963]: E1108 00:20:55.951498 1963 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:20:55.956145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:20:55.956383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:20:56.378632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919494385.mount: Deactivated successfully. 
Nov 8 00:20:59.652285 containerd[1457]: time="2025-11-08T00:20:59.652214830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:59.652938 containerd[1457]: time="2025-11-08T00:20:59.652901798Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 8 00:20:59.654125 containerd[1457]: time="2025-11-08T00:20:59.654066602Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:59.657185 containerd[1457]: time="2025-11-08T00:20:59.657129665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:59.658379 containerd[1457]: time="2025-11-08T00:20:59.658333392Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.244419095s" Nov 8 00:20:59.658379 containerd[1457]: time="2025-11-08T00:20:59.658366183Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 8 00:21:03.508651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:03.521022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:03.546878 systemd[1]: Reloading requested from client PID 2057 ('systemctl') (unit session-7.scope)... Nov 8 00:21:03.546894 systemd[1]: Reloading... Nov 8 00:21:03.624927 zram_generator::config[2099]: No configuration found. Nov 8 00:21:03.951672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:21:04.028860 systemd[1]: Reloading finished in 481 ms. Nov 8 00:21:04.079923 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:21:04.080036 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:21:04.080339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:04.083111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:04.254479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:04.260504 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:21:04.397964 kubelet[2145]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:21:04.397964 kubelet[2145]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 8 00:21:04.397964 kubelet[2145]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:21:04.398387 kubelet[2145]: I1108 00:21:04.397998 2145 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:21:05.405539 kubelet[2145]: I1108 00:21:05.405478 2145 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:21:05.405539 kubelet[2145]: I1108 00:21:05.405523 2145 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:21:05.406028 kubelet[2145]: I1108 00:21:05.405792 2145 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:21:05.471697 kubelet[2145]: E1108 00:21:05.471645 2145 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:21:05.472064 kubelet[2145]: I1108 00:21:05.472036 2145 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:21:05.477509 kubelet[2145]: E1108 00:21:05.477482 2145 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:21:05.477509 kubelet[2145]: I1108 00:21:05.477507 2145 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:21:05.483511 kubelet[2145]: I1108 00:21:05.483481 2145 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:21:05.483828 kubelet[2145]: I1108 00:21:05.483777 2145 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:21:05.484024 kubelet[2145]: I1108 00:21:05.483801 2145 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:21:05.484138 kubelet[2145]: I1108 00:21:05.484027 2145 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:21:05.484138 kubelet[2145]: I1108 00:21:05.484040 2145 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:21:05.485607 kubelet[2145]: I1108 00:21:05.485571 2145 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:21:05.488170 kubelet[2145]: I1108 00:21:05.488131 2145 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:21:05.488170 kubelet[2145]: I1108 00:21:05.488156 2145 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:21:05.488247 kubelet[2145]: I1108 00:21:05.488187 2145 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:21:05.488247 kubelet[2145]: I1108 00:21:05.488206 2145 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:21:05.495577 kubelet[2145]: I1108 00:21:05.495543 2145 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:21:05.496202 kubelet[2145]: I1108 00:21:05.496176 2145 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:21:05.496202 kubelet[2145]: E1108 00:21:05.496183 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 
00:21:05.496470 kubelet[2145]: E1108 00:21:05.496438 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:21:05.496787 kubelet[2145]: W1108 00:21:05.496760 2145 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:21:05.500207 kubelet[2145]: I1108 00:21:05.500182 2145 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:21:05.500456 kubelet[2145]: I1108 00:21:05.500292 2145 server.go:1289] "Started kubelet" Nov 8 00:21:05.500578 kubelet[2145]: I1108 00:21:05.500512 2145 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:21:05.501506 kubelet[2145]: I1108 00:21:05.501480 2145 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:21:05.501582 kubelet[2145]: I1108 00:21:05.501473 2145 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:21:05.502969 kubelet[2145]: I1108 00:21:05.502946 2145 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:21:05.503691 kubelet[2145]: I1108 00:21:05.503674 2145 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:21:05.504232 kubelet[2145]: I1108 00:21:05.504209 2145 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:21:05.506535 kubelet[2145]: E1108 00:21:05.506078 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:05.506535 kubelet[2145]: I1108 00:21:05.506122 2145 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:21:05.506535 kubelet[2145]: I1108 00:21:05.506213 2145 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:21:05.506535 kubelet[2145]: I1108 00:21:05.506274 2145 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:21:05.506535 kubelet[2145]: E1108 00:21:05.506470 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:21:05.507157 kubelet[2145]: I1108 00:21:05.507139 2145 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:21:05.507358 kubelet[2145]: I1108 00:21:05.507307 2145 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:21:05.507636 kubelet[2145]: E1108 00:21:05.507613 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms" Nov 8 00:21:05.508085 kubelet[2145]: E1108 00:21:05.508016 2145 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:21:05.508835 kubelet[2145]: E1108 00:21:05.506801 2145 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e021ed00b6f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:21:05.500206834 +0000 UTC m=+1.221177719,LastTimestamp:2025-11-08 00:21:05.500206834 +0000 UTC m=+1.221177719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:21:05.508835 kubelet[2145]: I1108 00:21:05.508469 2145 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:21:05.526259 kubelet[2145]: I1108 00:21:05.526237 2145 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:21:05.526259 kubelet[2145]: I1108 00:21:05.526254 2145 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:21:05.526377 kubelet[2145]: I1108 00:21:05.526336 2145 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:21:05.526584 kubelet[2145]: I1108 00:21:05.526492 2145 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:21:05.528580 kubelet[2145]: I1108 00:21:05.528559 2145 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:21:05.528616 kubelet[2145]: I1108 00:21:05.528585 2145 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:21:05.528616 kubelet[2145]: I1108 00:21:05.528608 2145 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
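The "Failed to ensure lease exists, will retry" errors above back off on a doubling schedule: the log shows interval="200ms" here, then 400ms, 800ms, 1.6s and 3.2s on later attempts while the apiserver stays unreachable. A minimal Go sketch of that exponential backoff pattern, not the kubelet's actual implementation; the cap value is an assumption for illustration.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Intervals seen in the log: 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s.
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // assumed cap, for illustration only
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
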
Nov 8 00:21:05.528687 kubelet[2145]: I1108 00:21:05.528618 2145 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:21:05.528718 kubelet[2145]: E1108 00:21:05.528673 2145 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:21:05.529377 kubelet[2145]: E1108 00:21:05.529328 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:21:05.571316 kubelet[2145]: I1108 00:21:05.571269 2145 policy_none.go:49] "None policy: Start" Nov 8 00:21:05.571316 kubelet[2145]: I1108 00:21:05.571298 2145 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:21:05.571316 kubelet[2145]: I1108 00:21:05.571314 2145 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:21:05.607184 kubelet[2145]: E1108 00:21:05.607140 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:05.630065 kubelet[2145]: E1108 00:21:05.629154 2145 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:21:05.630209 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:21:05.646315 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:21:05.649538 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:21:05.663922 kubelet[2145]: E1108 00:21:05.662947 2145 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:21:05.663922 kubelet[2145]: I1108 00:21:05.663222 2145 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:21:05.663922 kubelet[2145]: I1108 00:21:05.663233 2145 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:21:05.663922 kubelet[2145]: I1108 00:21:05.663766 2145 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:21:05.664592 kubelet[2145]: E1108 00:21:05.664526 2145 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:21:05.664592 kubelet[2145]: E1108 00:21:05.664592 2145 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:21:05.708500 kubelet[2145]: E1108 00:21:05.708459 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms" Nov 8 00:21:05.764683 kubelet[2145]: I1108 00:21:05.764632 2145 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:21:05.765050 kubelet[2145]: E1108 00:21:05.765021 2145 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Nov 8 00:21:05.840223 systemd[1]: Created slice kubepods-burstable-pod2e4ec4ad2a3ea094517b055b3e33efc9.slice - libcontainer container kubepods-burstable-pod2e4ec4ad2a3ea094517b055b3e33efc9.slice. Nov 8 00:21:05.853583 kubelet[2145]: E1108 00:21:05.853546 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:05.909953 kubelet[2145]: I1108 00:21:05.909921 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:05.910016 kubelet[2145]: I1108 00:21:05.909958 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:05.910016 kubelet[2145]: I1108 00:21:05.909985 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:05.910016 kubelet[2145]: I1108 00:21:05.910010 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e4ec4ad2a3ea094517b055b3e33efc9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e4ec4ad2a3ea094517b055b3e33efc9\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:05.910084 kubelet[2145]: I1108 00:21:05.910054 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e4ec4ad2a3ea094517b055b3e33efc9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e4ec4ad2a3ea094517b055b3e33efc9\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:05.910115 kubelet[2145]: I1108 00:21:05.910089 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2e4ec4ad2a3ea094517b055b3e33efc9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2e4ec4ad2a3ea094517b055b3e33efc9\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:05.910137 kubelet[2145]: I1108 00:21:05.910113 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:05.910163 kubelet[2145]: I1108 00:21:05.910138 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:05.944606 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 8 00:21:05.946374 kubelet[2145]: E1108 00:21:05.946344 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:05.966821 kubelet[2145]: I1108 00:21:05.966774 2145 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:21:05.967215 kubelet[2145]: E1108 00:21:05.967170 2145 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Nov 8 00:21:06.011097 kubelet[2145]: I1108 00:21:06.011047 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:21:06.038753 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 8 00:21:06.040717 kubelet[2145]: E1108 00:21:06.040672 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:06.109449 kubelet[2145]: E1108 00:21:06.109392 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Nov 8 00:21:06.155046 kubelet[2145]: E1108 00:21:06.154989 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:06.155952 containerd[1457]: time="2025-11-08T00:21:06.155903183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2e4ec4ad2a3ea094517b055b3e33efc9,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:06.247570 kubelet[2145]: E1108 00:21:06.247427 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:06.248294 containerd[1457]: time="2025-11-08T00:21:06.248255495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:06.342149 kubelet[2145]: E1108 00:21:06.342115 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:06.342748 containerd[1457]: time="2025-11-08T00:21:06.342703419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:06.368787 kubelet[2145]: I1108 00:21:06.368754 2145 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:21:06.369163 kubelet[2145]: E1108 00:21:06.369125 2145 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Nov 8 00:21:06.558973 kubelet[2145]: E1108 00:21:06.558931 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:21:06.619291 kubelet[2145]: E1108 00:21:06.619236 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:21:06.669025 kubelet[2145]: E1108 00:21:06.668979 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:21:06.761303 kubelet[2145]: E1108 
00:21:06.761238 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:21:06.910630 kubelet[2145]: E1108 00:21:06.910523 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="1.6s" Nov 8 00:21:07.095762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883883242.mount: Deactivated successfully. Nov 8 00:21:07.104914 containerd[1457]: time="2025-11-08T00:21:07.104871694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:07.105838 containerd[1457]: time="2025-11-08T00:21:07.105793283Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:07.106634 containerd[1457]: time="2025-11-08T00:21:07.106587309Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:07.107514 containerd[1457]: time="2025-11-08T00:21:07.107466957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:21:07.108365 containerd[1457]: time="2025-11-08T00:21:07.108330465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:21:07.109218 containerd[1457]: time="2025-11-08T00:21:07.109195756Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:21:07.110015 containerd[1457]: time="2025-11-08T00:21:07.109988931Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:07.113830 containerd[1457]: time="2025-11-08T00:21:07.113791397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:07.114637 containerd[1457]: time="2025-11-08T00:21:07.114604498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 866.264574ms" Nov 8 00:21:07.115797 containerd[1457]: time="2025-11-08T00:21:07.115763450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 772.984387ms" Nov 8 00:21:07.116892 containerd[1457]: time="2025-11-08T00:21:07.116864672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 960.866867ms" Nov 8 00:21:07.171278 kubelet[2145]: I1108 00:21:07.171164 2145 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:21:07.171608 kubelet[2145]: E1108 00:21:07.171525 2145 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Nov 8 00:21:07.253095 containerd[1457]: time="2025-11-08T00:21:07.252836500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:07.253095 containerd[1457]: time="2025-11-08T00:21:07.252902747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:07.253095 containerd[1457]: time="2025-11-08T00:21:07.252931341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:07.253095 containerd[1457]: time="2025-11-08T00:21:07.253014600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:07.257861 containerd[1457]: time="2025-11-08T00:21:07.257767490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:07.257861 containerd[1457]: time="2025-11-08T00:21:07.257830610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:07.257861 containerd[1457]: time="2025-11-08T00:21:07.257842514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:07.258029 containerd[1457]: time="2025-11-08T00:21:07.257925963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:07.261642 containerd[1457]: time="2025-11-08T00:21:07.261467612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:07.261642 containerd[1457]: time="2025-11-08T00:21:07.261503440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:07.261642 containerd[1457]: time="2025-11-08T00:21:07.261514431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:07.261642 containerd[1457]: time="2025-11-08T00:21:07.261572653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:07.279942 systemd[1]: Started cri-containerd-efff3d072bd8fc931b87032422bf060af6600e645b5d6b939fffa2904dffa49c.scope - libcontainer container efff3d072bd8fc931b87032422bf060af6600e645b5d6b939fffa2904dffa49c. Nov 8 00:21:07.283403 systemd[1]: Started cri-containerd-4dc44d2eec412afc89a92e0201c39dc6412793941a00e9da166ed907177a9748.scope - libcontainer container 4dc44d2eec412afc89a92e0201c39dc6412793941a00e9da166ed907177a9748. Nov 8 00:21:07.286108 systemd[1]: Started cri-containerd-5c104f810261903272b61e331a0cb372e41157b8cad51402de71bf1911251cb3.scope - libcontainer container 5c104f810261903272b61e331a0cb372e41157b8cad51402de71bf1911251cb3. Nov 8 00:21:07.323785 containerd[1457]: time="2025-11-08T00:21:07.323680639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2e4ec4ad2a3ea094517b055b3e33efc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"efff3d072bd8fc931b87032422bf060af6600e645b5d6b939fffa2904dffa49c\"" Nov 8 00:21:07.325106 kubelet[2145]: E1108 00:21:07.325075 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:07.332274 containerd[1457]: time="2025-11-08T00:21:07.332206670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dc44d2eec412afc89a92e0201c39dc6412793941a00e9da166ed907177a9748\"" Nov 8 00:21:07.333122 kubelet[2145]: E1108 00:21:07.333071 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:07.335063 containerd[1457]: time="2025-11-08T00:21:07.335021642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c104f810261903272b61e331a0cb372e41157b8cad51402de71bf1911251cb3\"" Nov 8 00:21:07.335898 kubelet[2145]: E1108 00:21:07.335882 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:07.494065 containerd[1457]: time="2025-11-08T00:21:07.493984818Z" level=info msg="CreateContainer within sandbox \"efff3d072bd8fc931b87032422bf060af6600e645b5d6b939fffa2904dffa49c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:21:07.509686 kubelet[2145]: E1108 00:21:07.509663 2145 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:21:07.535236 containerd[1457]: time="2025-11-08T00:21:07.535189722Z" level=info msg="CreateContainer within sandbox \"4dc44d2eec412afc89a92e0201c39dc6412793941a00e9da166ed907177a9748\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:21:07.662038 containerd[1457]: time="2025-11-08T00:21:07.661991110Z" level=info msg="CreateContainer within sandbox \"5c104f810261903272b61e331a0cb372e41157b8cad51402de71bf1911251cb3\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:21:08.511882 kubelet[2145]: E1108 00:21:08.511831 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="3.2s" Nov 8 00:21:08.598679 kubelet[2145]: E1108 00:21:08.598647 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:21:08.773098 kubelet[2145]: I1108 00:21:08.772969 2145 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:21:08.773325 kubelet[2145]: E1108 00:21:08.773304 2145 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Nov 8 00:21:08.866309 kubelet[2145]: E1108 00:21:08.866257 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:21:08.967313 kubelet[2145]: E1108 00:21:08.967273 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:21:09.021663 kubelet[2145]: E1108 00:21:09.021544 2145 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e021ed00b6f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:21:05.500206834 +0000 UTC m=+1.221177719,LastTimestamp:2025-11-08 00:21:05.500206834 +0000 UTC m=+1.221177719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:21:09.231384 kubelet[2145]: E1108 00:21:09.231261 2145 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:21:09.404151 containerd[1457]: time="2025-11-08T00:21:09.404109096Z" level=info msg="CreateContainer within sandbox \"5c104f810261903272b61e331a0cb372e41157b8cad51402de71bf1911251cb3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"2e1e09fa0920474713702c04f92bb55df316e59860ab03f42259b2ac97e0c8ea\"" Nov 8 00:21:09.404987 containerd[1457]: time="2025-11-08T00:21:09.404953975Z" level=info msg="StartContainer for \"2e1e09fa0920474713702c04f92bb55df316e59860ab03f42259b2ac97e0c8ea\"" Nov 8 00:21:09.430937 systemd[1]: Started cri-containerd-2e1e09fa0920474713702c04f92bb55df316e59860ab03f42259b2ac97e0c8ea.scope - libcontainer container 2e1e09fa0920474713702c04f92bb55df316e59860ab03f42259b2ac97e0c8ea. Nov 8 00:21:09.544263 containerd[1457]: time="2025-11-08T00:21:09.544135135Z" level=info msg="CreateContainer within sandbox \"4dc44d2eec412afc89a92e0201c39dc6412793941a00e9da166ed907177a9748\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"52d0f1cc1d8d908c817902ae34811056dfd856e7737d91b0af71dc012d6572aa\"" Nov 8 00:21:09.544263 containerd[1457]: time="2025-11-08T00:21:09.544143892Z" level=info msg="StartContainer for \"2e1e09fa0920474713702c04f92bb55df316e59860ab03f42259b2ac97e0c8ea\" returns successfully" Nov 8 00:21:09.545166 containerd[1457]: time="2025-11-08T00:21:09.545093490Z" level=info msg="StartContainer for \"52d0f1cc1d8d908c817902ae34811056dfd856e7737d91b0af71dc012d6572aa\"" Nov 8 00:21:09.548948 kubelet[2145]: E1108 00:21:09.548703 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:09.548948 kubelet[2145]: E1108 00:21:09.548851 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:09.549270 containerd[1457]: time="2025-11-08T00:21:09.548779981Z" level=info msg="CreateContainer within sandbox \"efff3d072bd8fc931b87032422bf060af6600e645b5d6b939fffa2904dffa49c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"531ee8c4cc942cd3c2e8273c1eecf08b65d44b04e7358c6d169c1b8fe3fe202a\"" Nov 8 00:21:09.549270 containerd[1457]: time="2025-11-08T00:21:09.549098658Z" level=info msg="StartContainer for \"531ee8c4cc942cd3c2e8273c1eecf08b65d44b04e7358c6d169c1b8fe3fe202a\"" Nov 8 00:21:09.698041 systemd[1]: Started cri-containerd-531ee8c4cc942cd3c2e8273c1eecf08b65d44b04e7358c6d169c1b8fe3fe202a.scope - libcontainer container 531ee8c4cc942cd3c2e8273c1eecf08b65d44b04e7358c6d169c1b8fe3fe202a. Nov 8 00:21:09.701853 systemd[1]: Started cri-containerd-52d0f1cc1d8d908c817902ae34811056dfd856e7737d91b0af71dc012d6572aa.scope - libcontainer container 52d0f1cc1d8d908c817902ae34811056dfd856e7737d91b0af71dc012d6572aa. 
Nov 8 00:21:09.744183 containerd[1457]: time="2025-11-08T00:21:09.744121890Z" level=info msg="StartContainer for \"531ee8c4cc942cd3c2e8273c1eecf08b65d44b04e7358c6d169c1b8fe3fe202a\" returns successfully" Nov 8 00:21:09.752955 containerd[1457]: time="2025-11-08T00:21:09.752908474Z" level=info msg="StartContainer for \"52d0f1cc1d8d908c817902ae34811056dfd856e7737d91b0af71dc012d6572aa\" returns successfully" Nov 8 00:21:10.552604 kubelet[2145]: E1108 00:21:10.552569 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:10.553132 kubelet[2145]: E1108 00:21:10.552736 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:10.553436 kubelet[2145]: E1108 00:21:10.553414 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:10.553680 kubelet[2145]: E1108 00:21:10.553656 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:10.553724 kubelet[2145]: E1108 00:21:10.553653 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:10.553850 kubelet[2145]: E1108 00:21:10.553836 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:11.351042 kubelet[2145]: E1108 00:21:11.350992 2145 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 8 00:21:11.553661 kubelet[2145]: E1108 00:21:11.553634 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:11.554092 kubelet[2145]: E1108 00:21:11.553769 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:11.554092 kubelet[2145]: E1108 00:21:11.553923 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:11.554092 kubelet[2145]: E1108 00:21:11.554070 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:11.710884 kubelet[2145]: E1108 00:21:11.710723 2145 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 8 00:21:11.715116 kubelet[2145]: E1108 00:21:11.715092 2145 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 8 00:21:11.728579 kubelet[2145]: E1108 00:21:11.728560 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:11.728695 
kubelet[2145]: E1108 00:21:11.728683 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:11.975298 kubelet[2145]: I1108 00:21:11.975027 2145 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:21:11.982601 kubelet[2145]: I1108 00:21:11.982576 2145 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:21:11.982601 kubelet[2145]: E1108 00:21:11.982605 2145 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 8 00:21:11.990082 kubelet[2145]: E1108 00:21:11.989911 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:12.090681 kubelet[2145]: E1108 00:21:12.090631 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:12.191361 kubelet[2145]: E1108 00:21:12.191329 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:12.291885 kubelet[2145]: E1108 00:21:12.291858 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:12.392289 kubelet[2145]: E1108 00:21:12.392244 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:12.493191 kubelet[2145]: E1108 00:21:12.493137 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:12.555709 kubelet[2145]: E1108 00:21:12.555606 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:12.556083 kubelet[2145]: E1108 00:21:12.555768 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:12.556083 kubelet[2145]: E1108 00:21:12.556040 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:12.556227 kubelet[2145]: E1108 00:21:12.556197 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:12.593522 kubelet[2145]: E1108 00:21:12.593472 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:12.694162 kubelet[2145]: E1108 00:21:12.694123 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:12.794696 kubelet[2145]: E1108 00:21:12.794642 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:12.895278 kubelet[2145]: E1108 00:21:12.895153 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:12.995971 kubelet[2145]: E1108 00:21:12.995925 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:13.096580 kubelet[2145]: E1108 00:21:13.096557 2145 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:13.197261 kubelet[2145]: E1108 00:21:13.197122 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:13.298221 kubelet[2145]: E1108 00:21:13.298164 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:13.398736 kubelet[2145]: E1108 00:21:13.398685 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:13.499030 kubelet[2145]: E1108 00:21:13.498992 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:13.557627 kubelet[2145]: E1108 00:21:13.557586 2145 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:21:13.558062 kubelet[2145]: E1108 00:21:13.557791 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:13.599797 kubelet[2145]: E1108 00:21:13.599728 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:13.700483 kubelet[2145]: E1108 00:21:13.700437 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:13.801122 kubelet[2145]: E1108 00:21:13.800988 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:13.901591 kubelet[2145]: E1108 00:21:13.901526 2145 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:14.007102 kubelet[2145]: I1108 00:21:14.007053 2145 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:21:14.069175 kubelet[2145]: I1108 00:21:14.069033 2145 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:14.077093 kubelet[2145]: I1108 00:21:14.077055 2145 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:14.204641 systemd[1]: Reloading requested from client PID 2433 ('systemctl') (unit session-7.scope)... Nov 8 00:21:14.204659 systemd[1]: Reloading... Nov 8 00:21:14.315859 zram_generator::config[2479]: No configuration found. Nov 8 00:21:14.428215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
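The recurring "Nameserver limits exceeded" errors mean the host's resolv.conf lists more nameservers than the resolver limit of three, so the kubelet keeps only the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted. A sketch of that truncation rule; the parsing is simplified and the fourth nameserver in the example is hypothetical.

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // resolver limit implied by the log

    // applyNameserverLimit keeps at most three nameserver entries, which is
    // why the applied line ends up as "1.1.1.1 1.0.0.1 8.8.8.8".
    func applyNameserverLimit(resolvConf string) []string {
        var servers []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) == 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            servers = servers[:maxNameservers] // the rest are "omitted", as the log warns
        }
        return servers
    }

    func main() {
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
        fmt.Println(strings.Join(applyNameserverLimit(conf), " "))
    }
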
Nov 8 00:21:14.497549 kubelet[2145]: I1108 00:21:14.497196 2145 apiserver.go:52] "Watching apiserver" Nov 8 00:21:14.499731 kubelet[2145]: E1108 00:21:14.499696 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:14.500086 kubelet[2145]: E1108 00:21:14.500051 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:14.506461 kubelet[2145]: I1108 00:21:14.506429 2145 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:21:14.522778 systemd[1]: Reloading finished in 317 ms. Nov 8 00:21:14.558433 kubelet[2145]: E1108 00:21:14.558402 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:14.564261 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:14.584854 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:21:14.585161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:14.585224 systemd[1]: kubelet.service: Consumed 1.373s CPU time, 135.1M memory peak, 0B memory swap peak. Nov 8 00:21:14.594374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:14.762581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:14.767255 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:21:14.798215 kubelet[2518]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:21:14.798215 kubelet[2518]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:21:14.798215 kubelet[2518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
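The deprecation warnings on the restarted kubelet say --container-runtime-endpoint and --volume-plugin-dir should instead be set in the file passed via the kubelet's --config flag. A sketch of the equivalent KubeletConfiguration fragment, emitted as JSON from a Go struct; the field names follow the published v1beta1 schema, the endpoint value is an assumption, and the volume plugin path is the one the kubelet recreates earlier in this log.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // kubeletConfigFragment models just the two fields the deprecation
    // warnings mention; a real KubeletConfiguration has many more.
    type kubeletConfigFragment struct {
        Kind                     string `json:"kind"`
        APIVersion               string `json:"apiVersion"`
        ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
        VolumePluginDir          string `json:"volumePluginDir"`
    }

    func main() {
        cfg := kubeletConfigFragment{
            Kind:       "KubeletConfiguration",
            APIVersion: "kubelet.config.k8s.io/v1beta1",
            // Assumed endpoint; the log only says the flag is deprecated.
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
            // Path the kubelet recreates earlier in the log.
            VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
        }
        out, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            fmt.Println("marshal failed:", err)
            return
        }
        fmt.Println(string(out))
    }
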
Nov 8 00:21:14.798601 kubelet[2518]: I1108 00:21:14.798264 2518 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:21:14.804833 kubelet[2518]: I1108 00:21:14.804782 2518 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:21:14.804833 kubelet[2518]: I1108 00:21:14.804802 2518 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:21:14.804997 kubelet[2518]: I1108 00:21:14.804978 2518 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:21:14.806053 kubelet[2518]: I1108 00:21:14.806032 2518 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:21:14.808551 kubelet[2518]: I1108 00:21:14.807918 2518 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:21:14.810922 kubelet[2518]: E1108 00:21:14.810885 2518 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:21:14.810922 kubelet[2518]: I1108 00:21:14.810912 2518 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:21:14.815654 kubelet[2518]: I1108 00:21:14.815623 2518 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:21:14.815868 kubelet[2518]: I1108 00:21:14.815834 2518 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:21:14.815981 kubelet[2518]: I1108 00:21:14.815855 2518 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:21:14.816065 kubelet[2518]: I1108 00:21:14.815985 2518 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 8 00:21:14.816065 kubelet[2518]: I1108 00:21:14.815994 2518 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:21:14.816065 kubelet[2518]: I1108 00:21:14.816038 2518 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:21:14.816192 kubelet[2518]: I1108 00:21:14.816183 2518 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:21:14.816228 kubelet[2518]: I1108 00:21:14.816194 2518 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:21:14.816228 kubelet[2518]: I1108 00:21:14.816213 2518 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:21:14.816228 kubelet[2518]: I1108 00:21:14.816229 2518 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:21:14.817193 kubelet[2518]: I1108 00:21:14.817044 2518 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:21:14.817469 kubelet[2518]: I1108 00:21:14.817454 2518 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:21:14.819911 kubelet[2518]: I1108 00:21:14.819876 2518 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:21:14.819993 kubelet[2518]: I1108 00:21:14.819916 2518 server.go:1289] "Started kubelet" Nov 8 00:21:14.822879 kubelet[2518]: I1108 00:21:14.820745 2518 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:21:14.822879 kubelet[2518]: I1108 00:21:14.821132 2518 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:21:14.822879 kubelet[2518]: I1108 00:21:14.821216 2518 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:21:14.822879 kubelet[2518]: I1108 00:21:14.821632 2518 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:21:14.822879 kubelet[2518]: I1108 00:21:14.822203 2518 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:21:14.823454 kubelet[2518]: E1108 00:21:14.823438 2518 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:21:14.823692 kubelet[2518]: I1108 00:21:14.823664 2518 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:21:14.824008 kubelet[2518]: I1108 00:21:14.823956 2518 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:21:14.824214 kubelet[2518]: I1108 00:21:14.824130 2518 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:21:14.824452 kubelet[2518]: I1108 00:21:14.824437 2518 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:21:14.828998 kubelet[2518]: I1108 00:21:14.828972 2518 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:21:14.829110 kubelet[2518]: I1108 00:21:14.829089 2518 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:21:14.830545 kubelet[2518]: E1108 00:21:14.830499 2518 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:21:14.833121 kubelet[2518]: I1108 00:21:14.833077 2518 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:21:14.833689 kubelet[2518]: I1108 00:21:14.833666 2518 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:21:14.844555 kubelet[2518]: I1108 00:21:14.842992 2518 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:21:14.844555 kubelet[2518]: I1108 00:21:14.843021 2518 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:21:14.844555 kubelet[2518]: I1108 00:21:14.843043 2518 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:21:14.844555 kubelet[2518]: I1108 00:21:14.843049 2518 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:21:14.844555 kubelet[2518]: E1108 00:21:14.843090 2518 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:21:14.864510 kubelet[2518]: I1108 00:21:14.864483 2518 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:21:14.864510 kubelet[2518]: I1108 00:21:14.864500 2518 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:21:14.864510 kubelet[2518]: I1108 00:21:14.864519 2518 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:21:14.864720 kubelet[2518]: I1108 00:21:14.864668 2518 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:21:14.864720 kubelet[2518]: I1108 00:21:14.864699 2518 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:21:14.864720 kubelet[2518]: I1108 00:21:14.864720 2518 policy_none.go:49] "None policy: Start" Nov 8 00:21:14.864828 kubelet[2518]: I1108 00:21:14.864730 2518 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:21:14.864828 kubelet[2518]: I1108 00:21:14.864743 2518 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:21:14.864896 kubelet[2518]: I1108 00:21:14.864883 2518 state_mem.go:75] "Updated machine memory state" Nov 8 00:21:14.868831 kubelet[2518]: E1108 00:21:14.868589 2518 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:21:14.868831 kubelet[2518]: I1108 00:21:14.868780 2518 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:21:14.868831 kubelet[2518]: I1108 00:21:14.868793 2518 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:21:14.869038 kubelet[2518]: I1108 00:21:14.869024 2518 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:21:14.869829 kubelet[2518]: E1108 00:21:14.869800 2518 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:21:14.944202 kubelet[2518]: I1108 00:21:14.944154 2518 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:14.944334 kubelet[2518]: I1108 00:21:14.944305 2518 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:21:14.944334 kubelet[2518]: I1108 00:21:14.944173 2518 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:14.973895 kubelet[2518]: I1108 00:21:14.973867 2518 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:21:15.026596 kubelet[2518]: I1108 00:21:15.026463 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e4ec4ad2a3ea094517b055b3e33efc9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e4ec4ad2a3ea094517b055b3e33efc9\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:15.026596 kubelet[2518]: I1108 00:21:15.026503 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e4ec4ad2a3ea094517b055b3e33efc9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2e4ec4ad2a3ea094517b055b3e33efc9\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:15.026596 kubelet[2518]: I1108 00:21:15.026534 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:15.026596 kubelet[2518]: I1108 00:21:15.026579 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:15.026839 kubelet[2518]: I1108 00:21:15.026608 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:21:15.026839 kubelet[2518]: I1108 00:21:15.026624 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e4ec4ad2a3ea094517b055b3e33efc9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e4ec4ad2a3ea094517b055b3e33efc9\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:15.026839 kubelet[2518]: I1108 00:21:15.026660 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:15.026839 kubelet[2518]: I1108 00:21:15.026686 2518 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:15.026839 kubelet[2518]: I1108 00:21:15.026702 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:15.252329 kubelet[2518]: E1108 00:21:15.252141 2518 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:15.252329 kubelet[2518]: E1108 00:21:15.252198 2518 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:21:15.252483 kubelet[2518]: E1108 00:21:15.252359 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:15.252483 kubelet[2518]: E1108 00:21:15.252164 2518 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:15.252483 kubelet[2518]: E1108 00:21:15.252361 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:15.252483 kubelet[2518]: E1108 00:21:15.252475 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:15.253432 kubelet[2518]: I1108 00:21:15.253090 2518 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:21:15.253432 kubelet[2518]: I1108 00:21:15.253144 2518 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:21:15.817393 kubelet[2518]: I1108 00:21:15.817333 2518 apiserver.go:52] "Watching apiserver" Nov 8 00:21:15.824478 kubelet[2518]: I1108 00:21:15.824456 2518 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:21:15.852194 kubelet[2518]: I1108 00:21:15.852164 2518 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:21:15.852397 kubelet[2518]: I1108 00:21:15.852379 2518 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:15.852501 kubelet[2518]: I1108 00:21:15.852484 2518 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:15.861093 kubelet[2518]: E1108 00:21:15.859994 2518 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:21:15.861093 kubelet[2518]: E1108 00:21:15.860013 2518 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" 
pod="kube-system/kube-controller-manager-localhost" Nov 8 00:21:15.861093 kubelet[2518]: E1108 00:21:15.860156 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:15.861093 kubelet[2518]: E1108 00:21:15.859994 2518 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:21:15.861093 kubelet[2518]: E1108 00:21:15.860176 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:15.861093 kubelet[2518]: E1108 00:21:15.860292 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:15.870821 kubelet[2518]: I1108 00:21:15.870764 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.87072434 podStartE2EDuration="1.87072434s" podCreationTimestamp="2025-11-08 00:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:15.870503101 +0000 UTC m=+1.098308974" watchObservedRunningTime="2025-11-08 00:21:15.87072434 +0000 UTC m=+1.098530213" Nov 8 00:21:15.876568 kubelet[2518]: I1108 00:21:15.876505 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8764898749999999 podStartE2EDuration="1.876489875s" podCreationTimestamp="2025-11-08 00:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:15.876429801 +0000 UTC m=+1.104235684" watchObservedRunningTime="2025-11-08 00:21:15.876489875 +0000 UTC m=+1.104295748" Nov 8 00:21:15.888679 kubelet[2518]: I1108 00:21:15.888625 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8886095809999999 podStartE2EDuration="1.888609581s" podCreationTimestamp="2025-11-08 00:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:15.882367884 +0000 UTC m=+1.110173757" watchObservedRunningTime="2025-11-08 00:21:15.888609581 +0000 UTC m=+1.116415444" Nov 8 00:21:16.853564 kubelet[2518]: E1108 00:21:16.853525 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:16.854224 kubelet[2518]: E1108 00:21:16.853673 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:16.854224 kubelet[2518]: E1108 00:21:16.853980 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:17.055829 update_engine[1449]: I20251108 00:21:17.055708 1449 update_attempter.cc:509] Updating boot flags... 
Nov 8 00:21:17.098827 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2580) Nov 8 00:21:17.127841 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2579) Nov 8 00:21:17.856516 kubelet[2518]: E1108 00:21:17.856485 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:18.857096 kubelet[2518]: E1108 00:21:18.857063 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:19.037083 kubelet[2518]: I1108 00:21:19.037046 2518 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:21:19.037499 containerd[1457]: time="2025-11-08T00:21:19.037448825Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:21:19.037876 kubelet[2518]: I1108 00:21:19.037633 2518 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:21:19.608358 systemd[1]: Created slice kubepods-besteffort-pod9b79670c_af84_4e02_9cf2_f79b84fda98d.slice - libcontainer container kubepods-besteffort-pod9b79670c_af84_4e02_9cf2_f79b84fda98d.slice. Nov 8 00:21:19.656597 kubelet[2518]: I1108 00:21:19.656550 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b79670c-af84-4e02-9cf2-f79b84fda98d-kube-proxy\") pod \"kube-proxy-pnnwh\" (UID: \"9b79670c-af84-4e02-9cf2-f79b84fda98d\") " pod="kube-system/kube-proxy-pnnwh" Nov 8 00:21:19.656597 kubelet[2518]: I1108 00:21:19.656591 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct97z\" (UniqueName: \"kubernetes.io/projected/9b79670c-af84-4e02-9cf2-f79b84fda98d-kube-api-access-ct97z\") pod \"kube-proxy-pnnwh\" (UID: \"9b79670c-af84-4e02-9cf2-f79b84fda98d\") " pod="kube-system/kube-proxy-pnnwh" Nov 8 00:21:19.656597 kubelet[2518]: I1108 00:21:19.656619 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b79670c-af84-4e02-9cf2-f79b84fda98d-xtables-lock\") pod \"kube-proxy-pnnwh\" (UID: \"9b79670c-af84-4e02-9cf2-f79b84fda98d\") " pod="kube-system/kube-proxy-pnnwh" Nov 8 00:21:19.656824 kubelet[2518]: I1108 00:21:19.656639 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b79670c-af84-4e02-9cf2-f79b84fda98d-lib-modules\") pod \"kube-proxy-pnnwh\" (UID: \"9b79670c-af84-4e02-9cf2-f79b84fda98d\") " pod="kube-system/kube-proxy-pnnwh" Nov 8 00:21:19.762159 kubelet[2518]: E1108 00:21:19.761945 2518 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 8 00:21:19.762159 kubelet[2518]: E1108 00:21:19.761977 2518 projected.go:194] Error preparing data for projected volume kube-api-access-ct97z for pod kube-system/kube-proxy-pnnwh: configmap "kube-root-ca.crt" not found Nov 8 00:21:19.762159 kubelet[2518]: E1108 00:21:19.762031 2518 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9b79670c-af84-4e02-9cf2-f79b84fda98d-kube-api-access-ct97z podName:9b79670c-af84-4e02-9cf2-f79b84fda98d nodeName:}" failed. No retries permitted until 2025-11-08 00:21:20.26201392 +0000 UTC m=+5.489819783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ct97z" (UniqueName: "kubernetes.io/projected/9b79670c-af84-4e02-9cf2-f79b84fda98d-kube-api-access-ct97z") pod "kube-proxy-pnnwh" (UID: "9b79670c-af84-4e02-9cf2-f79b84fda98d") : configmap "kube-root-ca.crt" not found Nov 8 00:21:20.199550 systemd[1]: Created slice kubepods-besteffort-podf48ec3aa_fdbc_408b_97b8_ac1777372c07.slice - libcontainer container kubepods-besteffort-podf48ec3aa_fdbc_408b_97b8_ac1777372c07.slice. Nov 8 00:21:20.261209 kubelet[2518]: I1108 00:21:20.261145 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f48ec3aa-fdbc-408b-97b8-ac1777372c07-var-lib-calico\") pod \"tigera-operator-7dcd859c48-pfd6t\" (UID: \"f48ec3aa-fdbc-408b-97b8-ac1777372c07\") " pod="tigera-operator/tigera-operator-7dcd859c48-pfd6t" Nov 8 00:21:20.261209 kubelet[2518]: I1108 00:21:20.261189 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dm2c\" (UniqueName: \"kubernetes.io/projected/f48ec3aa-fdbc-408b-97b8-ac1777372c07-kube-api-access-4dm2c\") pod \"tigera-operator-7dcd859c48-pfd6t\" (UID: \"f48ec3aa-fdbc-408b-97b8-ac1777372c07\") " pod="tigera-operator/tigera-operator-7dcd859c48-pfd6t" Nov 8 00:21:20.503582 containerd[1457]: time="2025-11-08T00:21:20.503526267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pfd6t,Uid:f48ec3aa-fdbc-408b-97b8-ac1777372c07,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:21:20.518207 kubelet[2518]: E1108 00:21:20.518008 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:20.518584 containerd[1457]: time="2025-11-08T00:21:20.518512481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pnnwh,Uid:9b79670c-af84-4e02-9cf2-f79b84fda98d,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:20.531829 containerd[1457]: time="2025-11-08T00:21:20.531665183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:20.531982 containerd[1457]: time="2025-11-08T00:21:20.531754001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:20.531982 containerd[1457]: time="2025-11-08T00:21:20.531920164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:20.532913 containerd[1457]: time="2025-11-08T00:21:20.532850132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:20.552400 containerd[1457]: time="2025-11-08T00:21:20.552247402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:20.552542 containerd[1457]: time="2025-11-08T00:21:20.552413175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:20.552542 containerd[1457]: time="2025-11-08T00:21:20.552471355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:20.552772 containerd[1457]: time="2025-11-08T00:21:20.552647357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:20.555026 systemd[1]: Started cri-containerd-112ae318841bc9ab23492cfedaf7e28e12948ef086d2a45c7ead8721e3056cdb.scope - libcontainer container 112ae318841bc9ab23492cfedaf7e28e12948ef086d2a45c7ead8721e3056cdb. Nov 8 00:21:20.582052 systemd[1]: Started cri-containerd-8fa572935dfa540a84c9f9ab1fbba853ab034f130ab26ad95bf1b87b490bb10c.scope - libcontainer container 8fa572935dfa540a84c9f9ab1fbba853ab034f130ab26ad95bf1b87b490bb10c. Nov 8 00:21:20.596312 containerd[1457]: time="2025-11-08T00:21:20.596232138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pfd6t,Uid:f48ec3aa-fdbc-408b-97b8-ac1777372c07,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"112ae318841bc9ab23492cfedaf7e28e12948ef086d2a45c7ead8721e3056cdb\"" Nov 8 00:21:20.599361 containerd[1457]: time="2025-11-08T00:21:20.599336784Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:21:20.610552 containerd[1457]: time="2025-11-08T00:21:20.610510075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pnnwh,Uid:9b79670c-af84-4e02-9cf2-f79b84fda98d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fa572935dfa540a84c9f9ab1fbba853ab034f130ab26ad95bf1b87b490bb10c\"" Nov 8 00:21:20.611423 kubelet[2518]: E1108 00:21:20.611396 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:20.617047 containerd[1457]: time="2025-11-08T00:21:20.617007772Z" level=info msg="CreateContainer within sandbox \"8fa572935dfa540a84c9f9ab1fbba853ab034f130ab26ad95bf1b87b490bb10c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:21:20.633802 containerd[1457]: time="2025-11-08T00:21:20.633748032Z" level=info msg="CreateContainer within sandbox \"8fa572935dfa540a84c9f9ab1fbba853ab034f130ab26ad95bf1b87b490bb10c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ababaeed4d9208a07eefb5b20f51db3b4c787638e1f63c1d7a4b9133fa6f7fe\"" Nov 8 00:21:20.635706 containerd[1457]: time="2025-11-08T00:21:20.635673941Z" level=info msg="StartContainer for \"1ababaeed4d9208a07eefb5b20f51db3b4c787638e1f63c1d7a4b9133fa6f7fe\"" Nov 8 00:21:20.671968 systemd[1]: Started cri-containerd-1ababaeed4d9208a07eefb5b20f51db3b4c787638e1f63c1d7a4b9133fa6f7fe.scope - libcontainer container 1ababaeed4d9208a07eefb5b20f51db3b4c787638e1f63c1d7a4b9133fa6f7fe. 
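The sandbox and container records above trace one pass through the CRI: RunPodSandbox returns a sandbox id, CreateContainer is issued inside that sandbox, and StartContainer launches the process that the log then reports as started successfully. A rough sketch of the same sequence against the containerd CRI socket; the socket path and image tag are assumptions, and error handling is compressed:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI endpoint for this host's containerd.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: yields an id like "8fa572935dfa..." in the log.
	sandboxCfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name:      "kube-proxy-pnnwh",
			Namespace: "kube-system",
			Uid:       "9b79670c-af84-4e02-9cf2-f79b84fda98d",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within the sandbox (image tag is a placeholder).
	cc, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtime.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer: on success the kubelet logs "returns successfully".
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{
		ContainerId: cc.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}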
Nov 8 00:21:20.704102 containerd[1457]: time="2025-11-08T00:21:20.704044572Z" level=info msg="StartContainer for \"1ababaeed4d9208a07eefb5b20f51db3b4c787638e1f63c1d7a4b9133fa6f7fe\" returns successfully" Nov 8 00:21:20.861604 kubelet[2518]: E1108 00:21:20.861373 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:21.020395 kubelet[2518]: E1108 00:21:21.020030 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:21.048284 kubelet[2518]: I1108 00:21:21.048157 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pnnwh" podStartSLOduration=2.048136538 podStartE2EDuration="2.048136538s" podCreationTimestamp="2025-11-08 00:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:20.874027994 +0000 UTC m=+6.101833887" watchObservedRunningTime="2025-11-08 00:21:21.048136538 +0000 UTC m=+6.275942411" Nov 8 00:21:21.714569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185333284.mount: Deactivated successfully. Nov 8 00:21:21.865922 kubelet[2518]: E1108 00:21:21.865896 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:22.110626 containerd[1457]: time="2025-11-08T00:21:22.110550646Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:22.111659 containerd[1457]: time="2025-11-08T00:21:22.111597302Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:21:22.113106 containerd[1457]: time="2025-11-08T00:21:22.113079791Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:22.115480 containerd[1457]: time="2025-11-08T00:21:22.115420980Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:22.116416 containerd[1457]: time="2025-11-08T00:21:22.116387054Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.516936033s" Nov 8 00:21:22.116482 containerd[1457]: time="2025-11-08T00:21:22.116418192Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:21:22.121320 containerd[1457]: time="2025-11-08T00:21:22.121262737Z" level=info msg="CreateContainer within sandbox \"112ae318841bc9ab23492cfedaf7e28e12948ef086d2a45c7ead8721e3056cdb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:21:22.135409 containerd[1457]: time="2025-11-08T00:21:22.135360189Z" level=info 
msg="CreateContainer within sandbox \"112ae318841bc9ab23492cfedaf7e28e12948ef086d2a45c7ead8721e3056cdb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c3423a49233883b518934a0379a912b21d0a665b76a5f4b6939db3e4175462f8\"" Nov 8 00:21:22.135947 containerd[1457]: time="2025-11-08T00:21:22.135919525Z" level=info msg="StartContainer for \"c3423a49233883b518934a0379a912b21d0a665b76a5f4b6939db3e4175462f8\"" Nov 8 00:21:22.163977 systemd[1]: Started cri-containerd-c3423a49233883b518934a0379a912b21d0a665b76a5f4b6939db3e4175462f8.scope - libcontainer container c3423a49233883b518934a0379a912b21d0a665b76a5f4b6939db3e4175462f8. Nov 8 00:21:22.208177 containerd[1457]: time="2025-11-08T00:21:22.208107964Z" level=info msg="StartContainer for \"c3423a49233883b518934a0379a912b21d0a665b76a5f4b6939db3e4175462f8\" returns successfully" Nov 8 00:21:22.877091 kubelet[2518]: I1108 00:21:22.877034 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-pfd6t" podStartSLOduration=1.35871938 podStartE2EDuration="2.877015471s" podCreationTimestamp="2025-11-08 00:21:20 +0000 UTC" firstStartedPulling="2025-11-08 00:21:20.599024635 +0000 UTC m=+5.826830508" lastFinishedPulling="2025-11-08 00:21:22.117320726 +0000 UTC m=+7.345126599" observedRunningTime="2025-11-08 00:21:22.876774566 +0000 UTC m=+8.104580439" watchObservedRunningTime="2025-11-08 00:21:22.877015471 +0000 UTC m=+8.104821344" Nov 8 00:21:25.734877 kubelet[2518]: E1108 00:21:25.734828 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:27.593842 sudo[1640]: pam_unix(sudo:session): session closed for user root Nov 8 00:21:27.597103 sshd[1637]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:27.601007 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:49670.service: Deactivated successfully. Nov 8 00:21:27.603455 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:21:27.603949 systemd[1]: session-7.scope: Consumed 5.932s CPU time, 159.3M memory peak, 0B memory swap peak. Nov 8 00:21:27.606080 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:21:27.607307 systemd-logind[1447]: Removed session 7. Nov 8 00:21:28.175090 kubelet[2518]: E1108 00:21:28.174816 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:28.879290 kubelet[2518]: E1108 00:21:28.879047 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:32.583075 systemd[1]: Created slice kubepods-besteffort-pod11c364ce_c651_48fe_b604_5d1ecdf7aad1.slice - libcontainer container kubepods-besteffort-pod11c364ce_c651_48fe_b604_5d1ecdf7aad1.slice. Nov 8 00:21:32.631137 systemd[1]: Created slice kubepods-besteffort-pod0f3f2f39_e48f_42ce_86c3_6f03782b331c.slice - libcontainer container kubepods-besteffort-pod0f3f2f39_e48f_42ce_86c3_6f03782b331c.slice. 
Nov 8 00:21:32.639422 kubelet[2518]: I1108 00:21:32.639376 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0f3f2f39-e48f-42ce-86c3-6f03782b331c-var-lib-calico\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.639422 kubelet[2518]: I1108 00:21:32.639426 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0f3f2f39-e48f-42ce-86c3-6f03782b331c-cni-net-dir\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.640775 kubelet[2518]: I1108 00:21:32.639447 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0f3f2f39-e48f-42ce-86c3-6f03782b331c-cni-log-dir\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.640775 kubelet[2518]: I1108 00:21:32.639494 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0f3f2f39-e48f-42ce-86c3-6f03782b331c-flexvol-driver-host\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.640775 kubelet[2518]: I1108 00:21:32.639518 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0f3f2f39-e48f-42ce-86c3-6f03782b331c-var-run-calico\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.640775 kubelet[2518]: I1108 00:21:32.639542 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2s5g\" (UniqueName: \"kubernetes.io/projected/11c364ce-c651-48fe-b604-5d1ecdf7aad1-kube-api-access-x2s5g\") pod \"calico-typha-996f659dd-kdlbn\" (UID: \"11c364ce-c651-48fe-b604-5d1ecdf7aad1\") " pod="calico-system/calico-typha-996f659dd-kdlbn" Nov 8 00:21:32.640775 kubelet[2518]: I1108 00:21:32.639579 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/11c364ce-c651-48fe-b604-5d1ecdf7aad1-typha-certs\") pod \"calico-typha-996f659dd-kdlbn\" (UID: \"11c364ce-c651-48fe-b604-5d1ecdf7aad1\") " pod="calico-system/calico-typha-996f659dd-kdlbn" Nov 8 00:21:32.640957 kubelet[2518]: I1108 00:21:32.639599 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f3f2f39-e48f-42ce-86c3-6f03782b331c-lib-modules\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.640957 kubelet[2518]: I1108 00:21:32.639621 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0f3f2f39-e48f-42ce-86c3-6f03782b331c-policysync\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.640957 kubelet[2518]: I1108 
00:21:32.639641 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f3f2f39-e48f-42ce-86c3-6f03782b331c-xtables-lock\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.640957 kubelet[2518]: I1108 00:21:32.639664 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11c364ce-c651-48fe-b604-5d1ecdf7aad1-tigera-ca-bundle\") pod \"calico-typha-996f659dd-kdlbn\" (UID: \"11c364ce-c651-48fe-b604-5d1ecdf7aad1\") " pod="calico-system/calico-typha-996f659dd-kdlbn" Nov 8 00:21:32.640957 kubelet[2518]: I1108 00:21:32.639683 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxdxt\" (UniqueName: \"kubernetes.io/projected/0f3f2f39-e48f-42ce-86c3-6f03782b331c-kube-api-access-rxdxt\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.641194 kubelet[2518]: I1108 00:21:32.639702 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0f3f2f39-e48f-42ce-86c3-6f03782b331c-cni-bin-dir\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.641194 kubelet[2518]: I1108 00:21:32.639722 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0f3f2f39-e48f-42ce-86c3-6f03782b331c-node-certs\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.641194 kubelet[2518]: I1108 00:21:32.639742 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f3f2f39-e48f-42ce-86c3-6f03782b331c-tigera-ca-bundle\") pod \"calico-node-qgw9l\" (UID: \"0f3f2f39-e48f-42ce-86c3-6f03782b331c\") " pod="calico-system/calico-node-qgw9l" Nov 8 00:21:32.730038 kubelet[2518]: E1108 00:21:32.729974 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd" Nov 8 00:21:32.747169 kubelet[2518]: E1108 00:21:32.747138 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.747970 kubelet[2518]: W1108 00:21:32.747217 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.748078 kubelet[2518]: E1108 00:21:32.748035 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:32.753115 kubelet[2518]: E1108 00:21:32.753085 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.753169 kubelet[2518]: W1108 00:21:32.753114 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.753169 kubelet[2518]: E1108 00:21:32.753145 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.753511 kubelet[2518]: E1108 00:21:32.753482 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.753561 kubelet[2518]: W1108 00:21:32.753510 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.753561 kubelet[2518]: E1108 00:21:32.753534 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.753871 kubelet[2518]: E1108 00:21:32.753854 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.753871 kubelet[2518]: W1108 00:21:32.753867 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.753945 kubelet[2518]: E1108 00:21:32.753878 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.761654 kubelet[2518]: E1108 00:21:32.761617 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.761654 kubelet[2518]: W1108 00:21:32.761647 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.761736 kubelet[2518]: E1108 00:21:32.761671 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.831251 kubelet[2518]: E1108 00:21:32.831068 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.831251 kubelet[2518]: W1108 00:21:32.831098 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.831251 kubelet[2518]: E1108 00:21:32.831125 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:32.832716 kubelet[2518]: E1108 00:21:32.832698 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.832716 kubelet[2518]: W1108 00:21:32.832712 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.832834 kubelet[2518]: E1108 00:21:32.832726 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.833040 kubelet[2518]: E1108 00:21:32.833015 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.833040 kubelet[2518]: W1108 00:21:32.833029 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.833040 kubelet[2518]: E1108 00:21:32.833039 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.833379 kubelet[2518]: E1108 00:21:32.833310 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.833379 kubelet[2518]: W1108 00:21:32.833320 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.833379 kubelet[2518]: E1108 00:21:32.833330 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.833633 kubelet[2518]: E1108 00:21:32.833617 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.833633 kubelet[2518]: W1108 00:21:32.833630 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.833713 kubelet[2518]: E1108 00:21:32.833640 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.833929 kubelet[2518]: E1108 00:21:32.833855 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.833929 kubelet[2518]: W1108 00:21:32.833906 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.833929 kubelet[2518]: E1108 00:21:32.833917 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:32.834119 kubelet[2518]: E1108 00:21:32.834106 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.834171 kubelet[2518]: W1108 00:21:32.834135 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.834171 kubelet[2518]: E1108 00:21:32.834151 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.834506 kubelet[2518]: E1108 00:21:32.834482 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.834506 kubelet[2518]: W1108 00:21:32.834497 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.834705 kubelet[2518]: E1108 00:21:32.834521 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.835019 kubelet[2518]: E1108 00:21:32.834902 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.835019 kubelet[2518]: W1108 00:21:32.834917 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.835019 kubelet[2518]: E1108 00:21:32.834929 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.835275 kubelet[2518]: E1108 00:21:32.835187 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.835275 kubelet[2518]: W1108 00:21:32.835200 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.835275 kubelet[2518]: E1108 00:21:32.835212 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.836230 kubelet[2518]: E1108 00:21:32.835620 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.836230 kubelet[2518]: W1108 00:21:32.835636 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.836230 kubelet[2518]: E1108 00:21:32.835648 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:32.836230 kubelet[2518]: E1108 00:21:32.836107 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.836230 kubelet[2518]: W1108 00:21:32.836120 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.836230 kubelet[2518]: E1108 00:21:32.836135 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.836470 kubelet[2518]: E1108 00:21:32.836459 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.836509 kubelet[2518]: W1108 00:21:32.836470 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.836509 kubelet[2518]: E1108 00:21:32.836483 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.836734 kubelet[2518]: E1108 00:21:32.836706 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.836734 kubelet[2518]: W1108 00:21:32.836719 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.836734 kubelet[2518]: E1108 00:21:32.836730 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.837010 kubelet[2518]: E1108 00:21:32.836992 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.837010 kubelet[2518]: W1108 00:21:32.837007 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.837075 kubelet[2518]: E1108 00:21:32.837018 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.837296 kubelet[2518]: E1108 00:21:32.837257 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.837296 kubelet[2518]: W1108 00:21:32.837274 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.837296 kubelet[2518]: E1108 00:21:32.837285 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:32.837589 kubelet[2518]: E1108 00:21:32.837568 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.837589 kubelet[2518]: W1108 00:21:32.837582 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.837674 kubelet[2518]: E1108 00:21:32.837594 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.838852 kubelet[2518]: E1108 00:21:32.838051 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.838852 kubelet[2518]: W1108 00:21:32.838068 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.838852 kubelet[2518]: E1108 00:21:32.838079 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.838852 kubelet[2518]: E1108 00:21:32.838302 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.838852 kubelet[2518]: W1108 00:21:32.838312 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.838852 kubelet[2518]: E1108 00:21:32.838323 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.838852 kubelet[2518]: E1108 00:21:32.838530 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.838852 kubelet[2518]: W1108 00:21:32.838541 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.838852 kubelet[2518]: E1108 00:21:32.838564 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.841910 kubelet[2518]: E1108 00:21:32.841891 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.841910 kubelet[2518]: W1108 00:21:32.841906 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.841991 kubelet[2518]: E1108 00:21:32.841920 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:32.841991 kubelet[2518]: I1108 00:21:32.841955 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fb889d5-2903-4e6b-a458-6fb9eecb4dcd-kubelet-dir\") pod \"csi-node-driver-67lwd\" (UID: \"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd\") " pod="calico-system/csi-node-driver-67lwd" Nov 8 00:21:32.842172 kubelet[2518]: E1108 00:21:32.842153 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.842172 kubelet[2518]: W1108 00:21:32.842169 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.842225 kubelet[2518]: E1108 00:21:32.842180 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.842247 kubelet[2518]: I1108 00:21:32.842225 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6fb889d5-2903-4e6b-a458-6fb9eecb4dcd-registration-dir\") pod \"csi-node-driver-67lwd\" (UID: \"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd\") " pod="calico-system/csi-node-driver-67lwd" Nov 8 00:21:32.842461 kubelet[2518]: E1108 00:21:32.842444 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.842461 kubelet[2518]: W1108 00:21:32.842458 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.842524 kubelet[2518]: E1108 00:21:32.842468 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.842524 kubelet[2518]: I1108 00:21:32.842489 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6fb889d5-2903-4e6b-a458-6fb9eecb4dcd-varrun\") pod \"csi-node-driver-67lwd\" (UID: \"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd\") " pod="calico-system/csi-node-driver-67lwd" Nov 8 00:21:32.842792 kubelet[2518]: E1108 00:21:32.842770 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.842792 kubelet[2518]: W1108 00:21:32.842788 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.842864 kubelet[2518]: E1108 00:21:32.842800 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:32.843044 kubelet[2518]: E1108 00:21:32.843027 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.843044 kubelet[2518]: W1108 00:21:32.843038 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.843108 kubelet[2518]: E1108 00:21:32.843047 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.843369 kubelet[2518]: E1108 00:21:32.843337 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.843369 kubelet[2518]: W1108 00:21:32.843364 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.843437 kubelet[2518]: E1108 00:21:32.843375 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.843599 kubelet[2518]: E1108 00:21:32.843579 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.843599 kubelet[2518]: W1108 00:21:32.843595 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.843714 kubelet[2518]: E1108 00:21:32.843606 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.843887 kubelet[2518]: E1108 00:21:32.843852 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.843887 kubelet[2518]: W1108 00:21:32.843867 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.843887 kubelet[2518]: E1108 00:21:32.843879 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:32.844669 kubelet[2518]: I1108 00:21:32.844020 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmpn\" (UniqueName: \"kubernetes.io/projected/6fb889d5-2903-4e6b-a458-6fb9eecb4dcd-kube-api-access-glmpn\") pod \"csi-node-driver-67lwd\" (UID: \"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd\") " pod="calico-system/csi-node-driver-67lwd" Nov 8 00:21:32.844978 kubelet[2518]: E1108 00:21:32.844653 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.844978 kubelet[2518]: W1108 00:21:32.844786 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.844978 kubelet[2518]: E1108 00:21:32.844830 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.845429 kubelet[2518]: E1108 00:21:32.845332 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.845639 kubelet[2518]: W1108 00:21:32.845624 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.845893 kubelet[2518]: E1108 00:21:32.845776 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.846234 kubelet[2518]: E1108 00:21:32.846209 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.846234 kubelet[2518]: W1108 00:21:32.846221 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.846234 kubelet[2518]: E1108 00:21:32.846232 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.846491 kubelet[2518]: E1108 00:21:32.846467 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.846530 kubelet[2518]: W1108 00:21:32.846497 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.846530 kubelet[2518]: E1108 00:21:32.846508 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:32.846815 kubelet[2518]: E1108 00:21:32.846771 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.846815 kubelet[2518]: W1108 00:21:32.846782 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.846815 kubelet[2518]: E1108 00:21:32.846794 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.846934 kubelet[2518]: I1108 00:21:32.846842 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6fb889d5-2903-4e6b-a458-6fb9eecb4dcd-socket-dir\") pod \"csi-node-driver-67lwd\" (UID: \"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd\") " pod="calico-system/csi-node-driver-67lwd" Nov 8 00:21:32.847352 kubelet[2518]: E1108 00:21:32.847160 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.847352 kubelet[2518]: W1108 00:21:32.847183 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.847352 kubelet[2518]: E1108 00:21:32.847207 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.847653 kubelet[2518]: E1108 00:21:32.847634 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.847653 kubelet[2518]: W1108 00:21:32.847647 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.847745 kubelet[2518]: E1108 00:21:32.847659 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.888082 kubelet[2518]: E1108 00:21:32.888044 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:32.891667 containerd[1457]: time="2025-11-08T00:21:32.891618688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-996f659dd-kdlbn,Uid:11c364ce-c651-48fe-b604-5d1ecdf7aad1,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:32.921642 containerd[1457]: time="2025-11-08T00:21:32.921517288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:32.921642 containerd[1457]: time="2025-11-08T00:21:32.921587349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:32.921642 containerd[1457]: time="2025-11-08T00:21:32.921598390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:32.921878 containerd[1457]: time="2025-11-08T00:21:32.921679182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:32.934060 kubelet[2518]: E1108 00:21:32.934026 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:32.934709 containerd[1457]: time="2025-11-08T00:21:32.934669425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qgw9l,Uid:0f3f2f39-e48f-42ce-86c3-6f03782b331c,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:32.947372 kubelet[2518]: E1108 00:21:32.947225 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.947372 kubelet[2518]: W1108 00:21:32.947246 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.947372 kubelet[2518]: E1108 00:21:32.947265 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.947731 kubelet[2518]: E1108 00:21:32.947649 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.947731 kubelet[2518]: W1108 00:21:32.947662 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.947731 kubelet[2518]: E1108 00:21:32.947673 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.948105 kubelet[2518]: E1108 00:21:32.948040 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.948105 kubelet[2518]: W1108 00:21:32.948051 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.948105 kubelet[2518]: E1108 00:21:32.948060 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:32.948392 kubelet[2518]: E1108 00:21:32.948361 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:32.948433 kubelet[2518]: W1108 00:21:32.948390 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:32.948433 kubelet[2518]: E1108 00:21:32.948416 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 8 00:21:32.951009 systemd[1]: Started cri-containerd-56b40f392b3a68d877d40bdc19cb1a31178c927c9da63b6dec3c8d3bf611bf6a.scope - libcontainer container 56b40f392b3a68d877d40bdc19cb1a31178c927c9da63b6dec3c8d3bf611bf6a.
Nov 8 00:21:32.963025 containerd[1457]: time="2025-11-08T00:21:32.962657710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:21:32.963025 containerd[1457]: time="2025-11-08T00:21:32.962732720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:21:32.963025 containerd[1457]: time="2025-11-08T00:21:32.962752949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:32.963025 containerd[1457]: time="2025-11-08T00:21:32.962897161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:32.984078 systemd[1]: Started cri-containerd-eccd74d42cdcc56b1a84ca74b3bb76c6761d9fe474e6d9ffcec987e70ec6ef6f.scope - libcontainer container eccd74d42cdcc56b1a84ca74b3bb76c6761d9fe474e6d9ffcec987e70ec6ef6f.
Nov 8 00:21:33.006540 containerd[1457]: time="2025-11-08T00:21:33.006498420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-996f659dd-kdlbn,Uid:11c364ce-c651-48fe-b604-5d1ecdf7aad1,Namespace:calico-system,Attempt:0,} returns sandbox id \"56b40f392b3a68d877d40bdc19cb1a31178c927c9da63b6dec3c8d3bf611bf6a\""
Nov 8 00:21:33.009877 kubelet[2518]: E1108 00:21:33.009850 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:33.012619 containerd[1457]: time="2025-11-08T00:21:33.012563651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qgw9l,Uid:0f3f2f39-e48f-42ce-86c3-6f03782b331c,Namespace:calico-system,Attempt:0,} returns sandbox id \"eccd74d42cdcc56b1a84ca74b3bb76c6761d9fe474e6d9ffcec987e70ec6ef6f\""
Nov 8 00:21:33.012619 containerd[1457]: time="2025-11-08T00:21:33.012591142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 8 00:21:33.013398 kubelet[2518]: E1108 00:21:33.013380 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:34.706893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2130005022.mount: Deactivated successfully.
Nov 8 00:21:34.845374 kubelet[2518]: E1108 00:21:34.845316 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd"
Nov 8 00:21:35.116046 containerd[1457]: time="2025-11-08T00:21:35.115979859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:35.117038 containerd[1457]: time="2025-11-08T00:21:35.116975100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 8 00:21:35.118789 containerd[1457]: time="2025-11-08T00:21:35.118760187Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:35.122237 containerd[1457]: time="2025-11-08T00:21:35.122179126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:35.122666 containerd[1457]: time="2025-11-08T00:21:35.122628230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.109982295s"
Nov 8 00:21:35.122666 containerd[1457]: time="2025-11-08T00:21:35.122658076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 8 00:21:35.124422 containerd[1457]: time="2025-11-08T00:21:35.124336362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
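From the two containerd entries above, the typha pull moved 35234628 bytes in the reported 2.109982295s, roughly 16.7 MB/s. A back-of-envelope check (numbers copied from the log):

    package main

    import "fmt"

    func main() {
        const bytesRead = 35234628  // "stop pulling image ...: bytes read=35234628"
        const seconds = 2.109982295 // "... in 2.109982295s"
        fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // ~16.7 MB/s
    }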
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:21:35.151264 containerd[1457]: time="2025-11-08T00:21:35.151119548Z" level=info msg="CreateContainer within sandbox \"56b40f392b3a68d877d40bdc19cb1a31178c927c9da63b6dec3c8d3bf611bf6a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:21:35.173412 containerd[1457]: time="2025-11-08T00:21:35.173370801Z" level=info msg="CreateContainer within sandbox \"56b40f392b3a68d877d40bdc19cb1a31178c927c9da63b6dec3c8d3bf611bf6a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"96009772e0228310bc2b342d9417a49b20c3ce416dbf01b6417341d30e70f5e0\"" Nov 8 00:21:35.176824 containerd[1457]: time="2025-11-08T00:21:35.174447446Z" level=info msg="StartContainer for \"96009772e0228310bc2b342d9417a49b20c3ce416dbf01b6417341d30e70f5e0\"" Nov 8 00:21:35.224953 systemd[1]: Started cri-containerd-96009772e0228310bc2b342d9417a49b20c3ce416dbf01b6417341d30e70f5e0.scope - libcontainer container 96009772e0228310bc2b342d9417a49b20c3ce416dbf01b6417341d30e70f5e0. Nov 8 00:21:35.269920 containerd[1457]: time="2025-11-08T00:21:35.269769366Z" level=info msg="StartContainer for \"96009772e0228310bc2b342d9417a49b20c3ce416dbf01b6417341d30e70f5e0\" returns successfully" Nov 8 00:21:35.896080 kubelet[2518]: E1108 00:21:35.896041 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:35.911658 kubelet[2518]: I1108 00:21:35.911390 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-996f659dd-kdlbn" podStartSLOduration=1.7995823949999998 podStartE2EDuration="3.911374293s" podCreationTimestamp="2025-11-08 00:21:32 +0000 UTC" firstStartedPulling="2025-11-08 00:21:33.012236175 +0000 UTC m=+18.240042048" lastFinishedPulling="2025-11-08 00:21:35.124028073 +0000 UTC m=+20.351833946" observedRunningTime="2025-11-08 00:21:35.910629342 +0000 UTC m=+21.138435235" watchObservedRunningTime="2025-11-08 00:21:35.911374293 +0000 UTC m=+21.139180166" Nov 8 00:21:35.959542 kubelet[2518]: E1108 00:21:35.959486 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:35.959542 kubelet[2518]: W1108 00:21:35.959531 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:35.959700 kubelet[2518]: E1108 00:21:35.959558 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:35.959829 kubelet[2518]: E1108 00:21:35.959796 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:35.959874 kubelet[2518]: W1108 00:21:35.959833 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:35.959874 kubelet[2518]: E1108 00:21:35.959845 2518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 8 00:21:36.561877 containerd[1457]: time="2025-11-08T00:21:36.561789664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:36.562781 containerd[1457]: time="2025-11-08T00:21:36.562738979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 8 00:21:36.564071 containerd[1457]: time="2025-11-08T00:21:36.564005671Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:36.566039 containerd[1457]: time="2025-11-08T00:21:36.566011432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:36.566643 containerd[1457]: time="2025-11-08T00:21:36.566599417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.442186862s"
Nov 8 00:21:36.566699 containerd[1457]: time="2025-11-08T00:21:36.566647077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 8 00:21:36.571631 containerd[1457]: time="2025-11-08T00:21:36.571592395Z" level=info msg="CreateContainer within sandbox \"eccd74d42cdcc56b1a84ca74b3bb76c6761d9fe474e6d9ffcec987e70ec6ef6f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 8 00:21:36.586819 containerd[1457]: time="2025-11-08T00:21:36.586756576Z" level=info msg="CreateContainer within sandbox \"eccd74d42cdcc56b1a84ca74b3bb76c6761d9fe474e6d9ffcec987e70ec6ef6f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e60a4764f9c948845677cea80704d228c281221fb0b03cff1ace45f9187268af\""
Nov 8 00:21:36.587927 containerd[1457]: time="2025-11-08T00:21:36.587883144Z" level=info msg="StartContainer for \"e60a4764f9c948845677cea80704d228c281221fb0b03cff1ace45f9187268af\""
Nov 8 00:21:36.619956 systemd[1]: Started cri-containerd-e60a4764f9c948845677cea80704d228c281221fb0b03cff1ace45f9187268af.scope - libcontainer container e60a4764f9c948845677cea80704d228c281221fb0b03cff1ace45f9187268af.
Nov 8 00:21:36.681836 systemd[1]: cri-containerd-e60a4764f9c948845677cea80704d228c281221fb0b03cff1ace45f9187268af.scope: Deactivated successfully.
Nov 8 00:21:36.843719 kubelet[2518]: E1108 00:21:36.843541 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd"
Nov 8 00:21:36.881260 containerd[1457]: time="2025-11-08T00:21:36.881196624Z" level=info msg="StartContainer for \"e60a4764f9c948845677cea80704d228c281221fb0b03cff1ace45f9187268af\" returns successfully"
Nov 8 00:21:36.898895 kubelet[2518]: I1108 00:21:36.898861 2518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 8 00:21:36.899276 kubelet[2518]: E1108 00:21:36.899256 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:36.900031 kubelet[2518]: E1108 00:21:36.900007 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:36.905322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e60a4764f9c948845677cea80704d228c281221fb0b03cff1ace45f9187268af-rootfs.mount: Deactivated successfully.
Nov 8 00:21:36.914221 containerd[1457]: time="2025-11-08T00:21:36.911592383Z" level=info msg="shim disconnected" id=e60a4764f9c948845677cea80704d228c281221fb0b03cff1ace45f9187268af namespace=k8s.io
Nov 8 00:21:36.914221 containerd[1457]: time="2025-11-08T00:21:36.914192150Z" level=warning msg="cleaning up after shim disconnected" id=e60a4764f9c948845677cea80704d228c281221fb0b03cff1ace45f9187268af namespace=k8s.io
Nov 8 00:21:36.914221 containerd[1457]: time="2025-11-08T00:21:36.914202891Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:21:37.901644 kubelet[2518]: E1108 00:21:37.901605 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:37.902518 containerd[1457]: time="2025-11-08T00:21:37.902474780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 8 00:21:38.844505 kubelet[2518]: E1108 00:21:38.844435 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd"
Nov 8 00:21:40.845151 kubelet[2518]: E1108 00:21:40.845106 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd"
Nov 8 00:21:41.363535 containerd[1457]: time="2025-11-08T00:21:41.363468830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:41.364694 containerd[1457]: time="2025-11-08T00:21:41.364617738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 8 00:21:41.365695 containerd[1457]: time="2025-11-08T00:21:41.365650218Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:41.368411 containerd[1457]: time="2025-11-08T00:21:41.368373213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:41.369163 containerd[1457]: time="2025-11-08T00:21:41.369125477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.466610872s"
Nov 8 00:21:41.369163 containerd[1457]: time="2025-11-08T00:21:41.369155333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 8 00:21:41.374159 containerd[1457]: time="2025-11-08T00:21:41.374126192Z" level=info msg="CreateContainer within sandbox \"eccd74d42cdcc56b1a84ca74b3bb76c6761d9fe474e6d9ffcec987e70ec6ef6f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 8 00:21:41.388852 containerd[1457]: time="2025-11-08T00:21:41.388789790Z" level=info msg="CreateContainer within sandbox \"eccd74d42cdcc56b1a84ca74b3bb76c6761d9fe474e6d9ffcec987e70ec6ef6f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"52a41bf90f931489ff983d96d89074c8bc734c31160cd61a00a334a84f2fa3ee\""
Nov 8 00:21:41.389362 containerd[1457]: time="2025-11-08T00:21:41.389337359Z" level=info msg="StartContainer for \"52a41bf90f931489ff983d96d89074c8bc734c31160cd61a00a334a84f2fa3ee\""
Nov 8 00:21:41.433965 systemd[1]: Started cri-containerd-52a41bf90f931489ff983d96d89074c8bc734c31160cd61a00a334a84f2fa3ee.scope - libcontainer container 52a41bf90f931489ff983d96d89074c8bc734c31160cd61a00a334a84f2fa3ee.
Nov 8 00:21:41.541522 containerd[1457]: time="2025-11-08T00:21:41.541475886Z" level=info msg="StartContainer for \"52a41bf90f931489ff983d96d89074c8bc734c31160cd61a00a334a84f2fa3ee\" returns successfully"
Nov 8 00:21:41.910020 kubelet[2518]: E1108 00:21:41.909968 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:42.844432 kubelet[2518]: E1108 00:21:42.844340 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd"
Nov 8 00:21:42.905936 systemd[1]: cri-containerd-52a41bf90f931489ff983d96d89074c8bc734c31160cd61a00a334a84f2fa3ee.scope: Deactivated successfully.
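The reported pull time for the cni image is consistent with the surrounding wall-clock stamps: PullImage was logged at 00:21:37.902474780Z, the Pulled entry at 00:21:41.369125477Z carries "in 3.466610872s", and the difference between the two stamps matches to within logging overhead. A quick cross-check (timestamps copied from the entries above):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start, _ := time.Parse(time.RFC3339Nano, "2025-11-08T00:21:37.902474780Z")
        done, _ := time.Parse(time.RFC3339Nano, "2025-11-08T00:21:41.369125477Z")
        fmt.Println(done.Sub(start)) // 3.466650697s, ~40µs over the reported 3.466610872s
    }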
Nov 8 00:21:42.911377 kubelet[2518]: E1108 00:21:42.911334 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:42.932129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52a41bf90f931489ff983d96d89074c8bc734c31160cd61a00a334a84f2fa3ee-rootfs.mount: Deactivated successfully.
Nov 8 00:21:42.936968 containerd[1457]: time="2025-11-08T00:21:42.936883540Z" level=info msg="shim disconnected" id=52a41bf90f931489ff983d96d89074c8bc734c31160cd61a00a334a84f2fa3ee namespace=k8s.io
Nov 8 00:21:42.936968 containerd[1457]: time="2025-11-08T00:21:42.936947861Z" level=warning msg="cleaning up after shim disconnected" id=52a41bf90f931489ff983d96d89074c8bc734c31160cd61a00a334a84f2fa3ee namespace=k8s.io
Nov 8 00:21:42.936968 containerd[1457]: time="2025-11-08T00:21:42.936959463Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:21:42.950697 kubelet[2518]: I1108 00:21:42.950643 2518 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 8 00:21:43.017875 systemd[1]: Created slice kubepods-burstable-pod4c9a132b_c373_4aff_a37f_8a647d110275.slice - libcontainer container kubepods-burstable-pod4c9a132b_c373_4aff_a37f_8a647d110275.slice.
Nov 8 00:21:43.020992 kubelet[2518]: I1108 00:21:43.020931 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6c6j\" (UniqueName: \"kubernetes.io/projected/4c9a132b-c373-4aff-a37f-8a647d110275-kube-api-access-p6c6j\") pod \"coredns-674b8bbfcf-c5rwc\" (UID: \"4c9a132b-c373-4aff-a37f-8a647d110275\") " pod="kube-system/coredns-674b8bbfcf-c5rwc"
Nov 8 00:21:43.020992 kubelet[2518]: I1108 00:21:43.020963 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c9a132b-c373-4aff-a37f-8a647d110275-config-volume\") pod \"coredns-674b8bbfcf-c5rwc\" (UID: \"4c9a132b-c373-4aff-a37f-8a647d110275\") " pod="kube-system/coredns-674b8bbfcf-c5rwc"
Nov 8 00:21:43.029110 systemd[1]: Created slice kubepods-besteffort-pod5ca7b27d_c4bf_4555_ac93_a9fa936c758c.slice - libcontainer container kubepods-besteffort-pod5ca7b27d_c4bf_4555_ac93_a9fa936c758c.slice.
Nov 8 00:21:43.044204 systemd[1]: Created slice kubepods-burstable-podda9743a8_c863_487c_b161_786bc9c10f6c.slice - libcontainer container kubepods-burstable-podda9743a8_c863_487c_b161_786bc9c10f6c.slice.
Nov 8 00:21:43.055285 systemd[1]: Created slice kubepods-besteffort-podad60f746_d64e_4394_bbcc_99e4406b9d56.slice - libcontainer container kubepods-besteffort-podad60f746_d64e_4394_bbcc_99e4406b9d56.slice.
Nov 8 00:21:43.062463 kubelet[2518]: I1108 00:21:43.062431 2518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 8 00:21:43.062821 kubelet[2518]: E1108 00:21:43.062791 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:43.064510 systemd[1]: Created slice kubepods-besteffort-pod2c05d744_421f_40ca_8faf_61db719dbbcd.slice - libcontainer container kubepods-besteffort-pod2c05d744_421f_40ca_8faf_61db719dbbcd.slice.
Nov 8 00:21:43.070600 systemd[1]: Created slice kubepods-besteffort-pod142f1b5e_d697_492e_b151_110e3673f549.slice - libcontainer container kubepods-besteffort-pod142f1b5e_d697_492e_b151_110e3673f549.slice.
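The Created slice entries above show how the kubelet's systemd cgroup driver derives unit names: the dashes in the pod UID are replaced with underscores (systemd treats "-" in slice names as a hierarchy separator), and the result is embedded as kubepods-<qos>-pod<uid>.slice. A sketch of the visible transformation (illustrative, not the actual kubelet helper):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the naming pattern visible in the log, e.g. UID
    // 4c9a132b-c373-4aff-a37f-8a647d110275 in the burstable QoS class becomes
    // kubepods-burstable-pod4c9a132b_c373_4aff_a37f_8a647d110275.slice.
    func sliceName(qos, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("burstable", "4c9a132b-c373-4aff-a37f-8a647d110275"))
    }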
Nov 8 00:21:43.076434 systemd[1]: Created slice kubepods-besteffort-pod0eac0b9a_9cb8_40ad_80af_819a17da25f0.slice - libcontainer container kubepods-besteffort-pod0eac0b9a_9cb8_40ad_80af_819a17da25f0.slice. Nov 8 00:21:43.122235 kubelet[2518]: I1108 00:21:43.122076 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9qv\" (UniqueName: \"kubernetes.io/projected/142f1b5e-d697-492e-b151-110e3673f549-kube-api-access-xg9qv\") pod \"whisker-68687457bd-6qht5\" (UID: \"142f1b5e-d697-492e-b151-110e3673f549\") " pod="calico-system/whisker-68687457bd-6qht5" Nov 8 00:21:43.122235 kubelet[2518]: I1108 00:21:43.122118 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da9743a8-c863-487c-b161-786bc9c10f6c-config-volume\") pod \"coredns-674b8bbfcf-htskj\" (UID: \"da9743a8-c863-487c-b161-786bc9c10f6c\") " pod="kube-system/coredns-674b8bbfcf-htskj" Nov 8 00:21:43.122235 kubelet[2518]: I1108 00:21:43.122155 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fchkc\" (UniqueName: \"kubernetes.io/projected/0eac0b9a-9cb8-40ad-80af-819a17da25f0-kube-api-access-fchkc\") pod \"calico-apiserver-5b76c59587-pzwfg\" (UID: \"0eac0b9a-9cb8-40ad-80af-819a17da25f0\") " pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" Nov 8 00:21:43.122235 kubelet[2518]: I1108 00:21:43.122172 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tz9v\" (UniqueName: \"kubernetes.io/projected/ad60f746-d64e-4394-bbcc-99e4406b9d56-kube-api-access-4tz9v\") pod \"calico-apiserver-5b76c59587-szgkh\" (UID: \"ad60f746-d64e-4394-bbcc-99e4406b9d56\") " pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" Nov 8 00:21:43.122520 kubelet[2518]: I1108 00:21:43.122448 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcjcd\" (UniqueName: \"kubernetes.io/projected/5ca7b27d-c4bf-4555-ac93-a9fa936c758c-kube-api-access-gcjcd\") pod \"calico-kube-controllers-86f57bbc6c-bq2jn\" (UID: \"5ca7b27d-c4bf-4555-ac93-a9fa936c758c\") " pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" Nov 8 00:21:43.122520 kubelet[2518]: I1108 00:21:43.122470 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c05d744-421f-40ca-8faf-61db719dbbcd-goldmane-ca-bundle\") pod \"goldmane-666569f655-vwhc4\" (UID: \"2c05d744-421f-40ca-8faf-61db719dbbcd\") " pod="calico-system/goldmane-666569f655-vwhc4" Nov 8 00:21:43.122520 kubelet[2518]: I1108 00:21:43.122487 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtxfc\" (UniqueName: \"kubernetes.io/projected/2c05d744-421f-40ca-8faf-61db719dbbcd-kube-api-access-xtxfc\") pod \"goldmane-666569f655-vwhc4\" (UID: \"2c05d744-421f-40ca-8faf-61db719dbbcd\") " pod="calico-system/goldmane-666569f655-vwhc4" Nov 8 00:21:43.122520 kubelet[2518]: I1108 00:21:43.122505 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0eac0b9a-9cb8-40ad-80af-819a17da25f0-calico-apiserver-certs\") pod \"calico-apiserver-5b76c59587-pzwfg\" (UID: \"0eac0b9a-9cb8-40ad-80af-819a17da25f0\") " 
pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" Nov 8 00:21:43.122520 kubelet[2518]: I1108 00:21:43.122519 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/142f1b5e-d697-492e-b151-110e3673f549-whisker-ca-bundle\") pod \"whisker-68687457bd-6qht5\" (UID: \"142f1b5e-d697-492e-b151-110e3673f549\") " pod="calico-system/whisker-68687457bd-6qht5" Nov 8 00:21:43.122689 kubelet[2518]: I1108 00:21:43.122534 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c05d744-421f-40ca-8faf-61db719dbbcd-config\") pod \"goldmane-666569f655-vwhc4\" (UID: \"2c05d744-421f-40ca-8faf-61db719dbbcd\") " pod="calico-system/goldmane-666569f655-vwhc4" Nov 8 00:21:43.122689 kubelet[2518]: I1108 00:21:43.122552 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8qz\" (UniqueName: \"kubernetes.io/projected/da9743a8-c863-487c-b161-786bc9c10f6c-kube-api-access-fl8qz\") pod \"coredns-674b8bbfcf-htskj\" (UID: \"da9743a8-c863-487c-b161-786bc9c10f6c\") " pod="kube-system/coredns-674b8bbfcf-htskj" Nov 8 00:21:43.122689 kubelet[2518]: I1108 00:21:43.122570 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2c05d744-421f-40ca-8faf-61db719dbbcd-goldmane-key-pair\") pod \"goldmane-666569f655-vwhc4\" (UID: \"2c05d744-421f-40ca-8faf-61db719dbbcd\") " pod="calico-system/goldmane-666569f655-vwhc4" Nov 8 00:21:43.122689 kubelet[2518]: I1108 00:21:43.122586 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad60f746-d64e-4394-bbcc-99e4406b9d56-calico-apiserver-certs\") pod \"calico-apiserver-5b76c59587-szgkh\" (UID: \"ad60f746-d64e-4394-bbcc-99e4406b9d56\") " pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" Nov 8 00:21:43.122689 kubelet[2518]: I1108 00:21:43.122601 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/142f1b5e-d697-492e-b151-110e3673f549-whisker-backend-key-pair\") pod \"whisker-68687457bd-6qht5\" (UID: \"142f1b5e-d697-492e-b151-110e3673f549\") " pod="calico-system/whisker-68687457bd-6qht5" Nov 8 00:21:43.122876 kubelet[2518]: I1108 00:21:43.122628 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca7b27d-c4bf-4555-ac93-a9fa936c758c-tigera-ca-bundle\") pod \"calico-kube-controllers-86f57bbc6c-bq2jn\" (UID: \"5ca7b27d-c4bf-4555-ac93-a9fa936c758c\") " pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" Nov 8 00:21:43.321034 kubelet[2518]: E1108 00:21:43.320975 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:43.321613 containerd[1457]: time="2025-11-08T00:21:43.321545985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c5rwc,Uid:4c9a132b-c373-4aff-a37f-8a647d110275,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:43.335369 containerd[1457]: time="2025-11-08T00:21:43.335309326Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-86f57bbc6c-bq2jn,Uid:5ca7b27d-c4bf-4555-ac93-a9fa936c758c,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:43.351730 kubelet[2518]: E1108 00:21:43.351688 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:43.356791 containerd[1457]: time="2025-11-08T00:21:43.356760477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-htskj,Uid:da9743a8-c863-487c-b161-786bc9c10f6c,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:43.358122 containerd[1457]: time="2025-11-08T00:21:43.358101797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b76c59587-szgkh,Uid:ad60f746-d64e-4394-bbcc-99e4406b9d56,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:21:43.370136 containerd[1457]: time="2025-11-08T00:21:43.370092929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vwhc4,Uid:2c05d744-421f-40ca-8faf-61db719dbbcd,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:43.375946 containerd[1457]: time="2025-11-08T00:21:43.375758059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68687457bd-6qht5,Uid:142f1b5e-d697-492e-b151-110e3673f549,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:43.380884 containerd[1457]: time="2025-11-08T00:21:43.380856566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b76c59587-pzwfg,Uid:0eac0b9a-9cb8-40ad-80af-819a17da25f0,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:21:43.468464 containerd[1457]: time="2025-11-08T00:21:43.468386601Z" level=error msg="Failed to destroy network for sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.473418 containerd[1457]: time="2025-11-08T00:21:43.472682199Z" level=error msg="encountered an error cleaning up failed sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.473418 containerd[1457]: time="2025-11-08T00:21:43.472750888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c5rwc,Uid:4c9a132b-c373-4aff-a37f-8a647d110275,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.474498 containerd[1457]: time="2025-11-08T00:21:43.474446273Z" level=error msg="Failed to destroy network for sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.474918 containerd[1457]: time="2025-11-08T00:21:43.474892170Z" level=error msg="encountered an error cleaning up failed sandbox 
\"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.475024 containerd[1457]: time="2025-11-08T00:21:43.474945320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f57bbc6c-bq2jn,Uid:5ca7b27d-c4bf-4555-ac93-a9fa936c758c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.499581 kubelet[2518]: E1108 00:21:43.496565 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.499581 kubelet[2518]: E1108 00:21:43.496644 2518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" Nov 8 00:21:43.499581 kubelet[2518]: E1108 00:21:43.496667 2518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" Nov 8 00:21:43.499853 kubelet[2518]: E1108 00:21:43.496712 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86f57bbc6c-bq2jn_calico-system(5ca7b27d-c4bf-4555-ac93-a9fa936c758c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86f57bbc6c-bq2jn_calico-system(5ca7b27d-c4bf-4555-ac93-a9fa936c758c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" podUID="5ca7b27d-c4bf-4555-ac93-a9fa936c758c" Nov 8 00:21:43.499853 kubelet[2518]: E1108 00:21:43.496752 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.499853 kubelet[2518]: E1108 00:21:43.496769 2518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c5rwc" Nov 8 00:21:43.500094 kubelet[2518]: E1108 00:21:43.496780 2518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c5rwc" Nov 8 00:21:43.500094 kubelet[2518]: E1108 00:21:43.496835 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-c5rwc_kube-system(4c9a132b-c373-4aff-a37f-8a647d110275)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-c5rwc_kube-system(4c9a132b-c373-4aff-a37f-8a647d110275)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-c5rwc" podUID="4c9a132b-c373-4aff-a37f-8a647d110275" Nov 8 00:21:43.503002 containerd[1457]: time="2025-11-08T00:21:43.500642647Z" level=error msg="Failed to destroy network for sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.503002 containerd[1457]: time="2025-11-08T00:21:43.501172622Z" level=error msg="encountered an error cleaning up failed sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.503002 containerd[1457]: time="2025-11-08T00:21:43.501239027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b76c59587-szgkh,Uid:ad60f746-d64e-4394-bbcc-99e4406b9d56,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.503155 kubelet[2518]: E1108 00:21:43.501446 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.503155 kubelet[2518]: E1108 00:21:43.501492 2518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" Nov 8 00:21:43.503155 kubelet[2518]: E1108 00:21:43.501510 2518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" Nov 8 00:21:43.503242 kubelet[2518]: E1108 00:21:43.501546 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b76c59587-szgkh_calico-apiserver(ad60f746-d64e-4394-bbcc-99e4406b9d56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b76c59587-szgkh_calico-apiserver(ad60f746-d64e-4394-bbcc-99e4406b9d56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" podUID="ad60f746-d64e-4394-bbcc-99e4406b9d56" Nov 8 00:21:43.523695 containerd[1457]: time="2025-11-08T00:21:43.523646424Z" level=error msg="Failed to destroy network for sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.524052 containerd[1457]: time="2025-11-08T00:21:43.524030074Z" level=error msg="encountered an error cleaning up failed sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.524098 containerd[1457]: time="2025-11-08T00:21:43.524073526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68687457bd-6qht5,Uid:142f1b5e-d697-492e-b151-110e3673f549,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.525321 kubelet[2518]: E1108 00:21:43.524294 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.525321 kubelet[2518]: E1108 00:21:43.524373 2518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68687457bd-6qht5" Nov 8 00:21:43.525321 kubelet[2518]: E1108 00:21:43.524395 2518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68687457bd-6qht5" Nov 8 00:21:43.525450 kubelet[2518]: E1108 00:21:43.524451 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68687457bd-6qht5_calico-system(142f1b5e-d697-492e-b151-110e3673f549)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68687457bd-6qht5_calico-system(142f1b5e-d697-492e-b151-110e3673f549)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68687457bd-6qht5" podUID="142f1b5e-d697-492e-b151-110e3673f549" Nov 8 00:21:43.535567 containerd[1457]: time="2025-11-08T00:21:43.535500078Z" level=error msg="Failed to destroy network for sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.536525 containerd[1457]: time="2025-11-08T00:21:43.536481011Z" level=error msg="encountered an error cleaning up failed sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.536799 containerd[1457]: time="2025-11-08T00:21:43.536777177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-htskj,Uid:da9743a8-c863-487c-b161-786bc9c10f6c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.538993 kubelet[2518]: E1108 00:21:43.537259 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.538993 kubelet[2518]: E1108 00:21:43.537327 2518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-htskj" Nov 8 00:21:43.538993 kubelet[2518]: E1108 00:21:43.537365 2518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-htskj" Nov 8 00:21:43.539128 kubelet[2518]: E1108 00:21:43.537412 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-htskj_kube-system(da9743a8-c863-487c-b161-786bc9c10f6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-htskj_kube-system(da9743a8-c863-487c-b161-786bc9c10f6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-htskj" podUID="da9743a8-c863-487c-b161-786bc9c10f6c" Nov 8 00:21:43.541203 containerd[1457]: time="2025-11-08T00:21:43.541165168Z" level=error msg="Failed to destroy network for sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.541563 containerd[1457]: time="2025-11-08T00:21:43.541534463Z" level=error msg="encountered an error cleaning up failed sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.541711 containerd[1457]: time="2025-11-08T00:21:43.541677271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vwhc4,Uid:2c05d744-421f-40ca-8faf-61db719dbbcd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.542212 kubelet[2518]: E1108 00:21:43.542175 2518 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.542274 kubelet[2518]: E1108 00:21:43.542236 2518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vwhc4" Nov 8 00:21:43.542274 kubelet[2518]: E1108 00:21:43.542261 2518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vwhc4" Nov 8 00:21:43.542334 kubelet[2518]: E1108 00:21:43.542309 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vwhc4_calico-system(2c05d744-421f-40ca-8faf-61db719dbbcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vwhc4_calico-system(2c05d744-421f-40ca-8faf-61db719dbbcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vwhc4" podUID="2c05d744-421f-40ca-8faf-61db719dbbcd" Nov 8 00:21:43.547767 containerd[1457]: time="2025-11-08T00:21:43.547736632Z" level=error msg="Failed to destroy network for sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.548084 containerd[1457]: time="2025-11-08T00:21:43.548055651Z" level=error msg="encountered an error cleaning up failed sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.548115 containerd[1457]: time="2025-11-08T00:21:43.548092610Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b76c59587-pzwfg,Uid:0eac0b9a-9cb8-40ad-80af-819a17da25f0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.548312 
kubelet[2518]: E1108 00:21:43.548269 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.548372 kubelet[2518]: E1108 00:21:43.548334 2518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" Nov 8 00:21:43.548372 kubelet[2518]: E1108 00:21:43.548365 2518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" Nov 8 00:21:43.548443 kubelet[2518]: E1108 00:21:43.548417 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b76c59587-pzwfg_calico-apiserver(0eac0b9a-9cb8-40ad-80af-819a17da25f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b76c59587-pzwfg_calico-apiserver(0eac0b9a-9cb8-40ad-80af-819a17da25f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" podUID="0eac0b9a-9cb8-40ad-80af-819a17da25f0" Nov 8 00:21:43.914946 kubelet[2518]: I1108 00:21:43.914896 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:21:43.924057 containerd[1457]: time="2025-11-08T00:21:43.923996329Z" level=info msg="StopPodSandbox for \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\"" Nov 8 00:21:43.933612 containerd[1457]: time="2025-11-08T00:21:43.933543162Z" level=info msg="Ensure that sandbox 904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b in task-service has been cleanup successfully" Nov 8 00:21:43.943294 kubelet[2518]: I1108 00:21:43.943251 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:21:43.945456 containerd[1457]: time="2025-11-08T00:21:43.944336354Z" level=info msg="StopPodSandbox for \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\"" Nov 8 00:21:43.945456 containerd[1457]: time="2025-11-08T00:21:43.944582025Z" level=info msg="Ensure that sandbox 4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0 in task-service has been cleanup successfully" Nov 8 00:21:43.946326 kubelet[2518]: I1108 
00:21:43.946300 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:21:43.946872 containerd[1457]: time="2025-11-08T00:21:43.946799190Z" level=info msg="StopPodSandbox for \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\"" Nov 8 00:21:43.947269 containerd[1457]: time="2025-11-08T00:21:43.947155859Z" level=info msg="Ensure that sandbox 169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc in task-service has been cleanup successfully" Nov 8 00:21:43.949541 kubelet[2518]: I1108 00:21:43.949507 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:21:43.950332 containerd[1457]: time="2025-11-08T00:21:43.950267604Z" level=info msg="StopPodSandbox for \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\"" Nov 8 00:21:43.950980 containerd[1457]: time="2025-11-08T00:21:43.950740623Z" level=info msg="Ensure that sandbox 193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965 in task-service has been cleanup successfully" Nov 8 00:21:43.965913 containerd[1457]: time="2025-11-08T00:21:43.965858338Z" level=error msg="StopPodSandbox for \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\" failed" error="failed to destroy network for sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:43.966084 kubelet[2518]: E1108 00:21:43.966032 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:21:43.966145 kubelet[2518]: E1108 00:21:43.966087 2518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b"} Nov 8 00:21:43.966179 kubelet[2518]: E1108 00:21:43.966151 2518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c05d744-421f-40ca-8faf-61db719dbbcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:43.966274 kubelet[2518]: E1108 00:21:43.966184 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c05d744-421f-40ca-8faf-61db719dbbcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-666569f655-vwhc4" podUID="2c05d744-421f-40ca-8faf-61db719dbbcd" Nov 8 00:21:43.968526 kubelet[2518]: I1108 00:21:43.966781 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:21:43.968646 containerd[1457]: time="2025-11-08T00:21:43.968332716Z" level=info msg="StopPodSandbox for \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\"" Nov 8 00:21:43.968646 containerd[1457]: time="2025-11-08T00:21:43.968584287Z" level=info msg="Ensure that sandbox 7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59 in task-service has been cleanup successfully" Nov 8 00:21:43.971279 kubelet[2518]: E1108 00:21:43.971239 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:43.973091 containerd[1457]: time="2025-11-08T00:21:43.972887311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:21:43.975316 kubelet[2518]: I1108 00:21:43.975019 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:21:43.975564 containerd[1457]: time="2025-11-08T00:21:43.975519183Z" level=info msg="StopPodSandbox for \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\"" Nov 8 00:21:43.975832 containerd[1457]: time="2025-11-08T00:21:43.975771107Z" level=info msg="Ensure that sandbox e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5 in task-service has been cleanup successfully" Nov 8 00:21:43.981958 kubelet[2518]: I1108 00:21:43.981926 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:21:43.982829 kubelet[2518]: E1108 00:21:43.982751 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:43.985262 containerd[1457]: time="2025-11-08T00:21:43.984256314Z" level=info msg="StopPodSandbox for \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\"" Nov 8 00:21:43.988951 containerd[1457]: time="2025-11-08T00:21:43.988924222Z" level=info msg="Ensure that sandbox 56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe in task-service has been cleanup successfully" Nov 8 00:21:44.026066 containerd[1457]: time="2025-11-08T00:21:44.025996920Z" level=error msg="StopPodSandbox for \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\" failed" error="failed to destroy network for sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:44.026688 kubelet[2518]: E1108 00:21:44.026643 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:21:44.026779 kubelet[2518]: E1108 00:21:44.026699 2518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965"} Nov 8 00:21:44.026779 kubelet[2518]: E1108 00:21:44.026734 2518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ca7b27d-c4bf-4555-ac93-a9fa936c758c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:44.026779 kubelet[2518]: E1108 00:21:44.026758 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ca7b27d-c4bf-4555-ac93-a9fa936c758c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" podUID="5ca7b27d-c4bf-4555-ac93-a9fa936c758c" Nov 8 00:21:44.028854 containerd[1457]: time="2025-11-08T00:21:44.028794784Z" level=error msg="StopPodSandbox for \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\" failed" error="failed to destroy network for sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:44.030175 kubelet[2518]: E1108 00:21:44.030115 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:21:44.030175 kubelet[2518]: E1108 00:21:44.030176 2518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0"} Nov 8 00:21:44.030351 kubelet[2518]: E1108 00:21:44.030200 2518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad60f746-d64e-4394-bbcc-99e4406b9d56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:44.030351 kubelet[2518]: E1108 00:21:44.030221 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad60f746-d64e-4394-bbcc-99e4406b9d56\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" podUID="ad60f746-d64e-4394-bbcc-99e4406b9d56" Nov 8 00:21:44.040653 containerd[1457]: time="2025-11-08T00:21:44.040595606Z" level=error msg="StopPodSandbox for \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\" failed" error="failed to destroy network for sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:44.041848 kubelet[2518]: E1108 00:21:44.040823 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:21:44.041848 kubelet[2518]: E1108 00:21:44.040876 2518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc"} Nov 8 00:21:44.041848 kubelet[2518]: E1108 00:21:44.040904 2518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da9743a8-c863-487c-b161-786bc9c10f6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:44.041848 kubelet[2518]: E1108 00:21:44.040923 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da9743a8-c863-487c-b161-786bc9c10f6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-htskj" podUID="da9743a8-c863-487c-b161-786bc9c10f6c" Nov 8 00:21:44.051151 containerd[1457]: time="2025-11-08T00:21:44.051091569Z" level=error msg="StopPodSandbox for \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\" failed" error="failed to destroy network for sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:44.051550 kubelet[2518]: E1108 00:21:44.051496 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:21:44.051550 kubelet[2518]: E1108 00:21:44.051536 2518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5"} Nov 8 00:21:44.051626 kubelet[2518]: E1108 00:21:44.051560 2518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0eac0b9a-9cb8-40ad-80af-819a17da25f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:44.051626 kubelet[2518]: E1108 00:21:44.051580 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0eac0b9a-9cb8-40ad-80af-819a17da25f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" podUID="0eac0b9a-9cb8-40ad-80af-819a17da25f0" Nov 8 00:21:44.055134 containerd[1457]: time="2025-11-08T00:21:44.055084378Z" level=error msg="StopPodSandbox for \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\" failed" error="failed to destroy network for sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:44.055382 kubelet[2518]: E1108 00:21:44.055308 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:21:44.055528 kubelet[2518]: E1108 00:21:44.055392 2518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59"} Nov 8 00:21:44.055528 kubelet[2518]: E1108 00:21:44.055442 2518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c9a132b-c373-4aff-a37f-8a647d110275\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Nov 8 00:21:44.055528 kubelet[2518]: E1108 00:21:44.055482 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c9a132b-c373-4aff-a37f-8a647d110275\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-c5rwc" podUID="4c9a132b-c373-4aff-a37f-8a647d110275" Nov 8 00:21:44.061174 containerd[1457]: time="2025-11-08T00:21:44.061067534Z" level=error msg="StopPodSandbox for \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\" failed" error="failed to destroy network for sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:44.061280 kubelet[2518]: E1108 00:21:44.061246 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:21:44.061359 kubelet[2518]: E1108 00:21:44.061287 2518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe"} Nov 8 00:21:44.061359 kubelet[2518]: E1108 00:21:44.061319 2518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"142f1b5e-d697-492e-b151-110e3673f549\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:44.061472 kubelet[2518]: E1108 00:21:44.061356 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"142f1b5e-d697-492e-b151-110e3673f549\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68687457bd-6qht5" podUID="142f1b5e-d697-492e-b151-110e3673f549" Nov 8 00:21:44.850200 systemd[1]: Created slice kubepods-besteffort-pod6fb889d5_2903_4e6b_a458_6fb9eecb4dcd.slice - libcontainer container kubepods-besteffort-pod6fb889d5_2903_4e6b_a458_6fb9eecb4dcd.slice. 
Nov 8 00:21:44.852454 containerd[1457]: time="2025-11-08T00:21:44.852416571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-67lwd,Uid:6fb889d5-2903-4e6b-a458-6fb9eecb4dcd,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:44.920452 containerd[1457]: time="2025-11-08T00:21:44.920407282Z" level=error msg="Failed to destroy network for sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:44.920798 containerd[1457]: time="2025-11-08T00:21:44.920765595Z" level=error msg="encountered an error cleaning up failed sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:44.920863 containerd[1457]: time="2025-11-08T00:21:44.920825798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-67lwd,Uid:6fb889d5-2903-4e6b-a458-6fb9eecb4dcd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:44.921106 kubelet[2518]: E1108 00:21:44.921058 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:44.921703 kubelet[2518]: E1108 00:21:44.921539 2518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-67lwd" Nov 8 00:21:44.921703 kubelet[2518]: E1108 00:21:44.921589 2518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-67lwd" Nov 8 00:21:44.921703 kubelet[2518]: E1108 00:21:44.921655 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-67lwd_calico-system(6fb889d5-2903-4e6b-a458-6fb9eecb4dcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-67lwd_calico-system(6fb889d5-2903-4e6b-a458-6fb9eecb4dcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd" Nov 8 00:21:44.923416 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19-shm.mount: Deactivated successfully. Nov 8 00:21:44.984017 kubelet[2518]: I1108 00:21:44.983979 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:21:44.984755 containerd[1457]: time="2025-11-08T00:21:44.984721267Z" level=info msg="StopPodSandbox for \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\"" Nov 8 00:21:44.985154 containerd[1457]: time="2025-11-08T00:21:44.984909341Z" level=info msg="Ensure that sandbox a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19 in task-service has been cleanup successfully" Nov 8 00:21:45.007789 containerd[1457]: time="2025-11-08T00:21:45.007741911Z" level=error msg="StopPodSandbox for \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\" failed" error="failed to destroy network for sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:45.008049 kubelet[2518]: E1108 00:21:45.007994 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:21:45.008108 kubelet[2518]: E1108 00:21:45.008057 2518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19"} Nov 8 00:21:45.008108 kubelet[2518]: E1108 00:21:45.008093 2518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:45.008230 kubelet[2518]: E1108 00:21:45.008118 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd" Nov 8 00:21:50.187667 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2110298283.mount: Deactivated successfully. Nov 8 00:21:51.281426 containerd[1457]: time="2025-11-08T00:21:51.281350403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:51.282516 containerd[1457]: time="2025-11-08T00:21:51.282475194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:21:51.286042 containerd[1457]: time="2025-11-08T00:21:51.286007384Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:51.288438 containerd[1457]: time="2025-11-08T00:21:51.288390237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:51.288953 containerd[1457]: time="2025-11-08T00:21:51.288891327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.315958792s" Nov 8 00:21:51.288953 containerd[1457]: time="2025-11-08T00:21:51.288939528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:21:51.306978 containerd[1457]: time="2025-11-08T00:21:51.306929654Z" level=info msg="CreateContainer within sandbox \"eccd74d42cdcc56b1a84ca74b3bb76c6761d9fe474e6d9ffcec987e70ec6ef6f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:21:51.328449 containerd[1457]: time="2025-11-08T00:21:51.328391918Z" level=info msg="CreateContainer within sandbox \"eccd74d42cdcc56b1a84ca74b3bb76c6761d9fe474e6d9ffcec987e70ec6ef6f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c75487f53068293ea0737837b5896b7bd4004b566b1576055606cdd2b6899d22\"" Nov 8 00:21:51.329016 containerd[1457]: time="2025-11-08T00:21:51.328978669Z" level=info msg="StartContainer for \"c75487f53068293ea0737837b5896b7bd4004b566b1576055606cdd2b6899d22\"" Nov 8 00:21:51.381062 systemd[1]: Started cri-containerd-c75487f53068293ea0737837b5896b7bd4004b566b1576055606cdd2b6899d22.scope - libcontainer container c75487f53068293ea0737837b5896b7bd4004b566b1576055606cdd2b6899d22. Nov 8 00:21:51.709453 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:21:51.710384 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Nov 8 00:21:52.185597 containerd[1457]: time="2025-11-08T00:21:52.185520775Z" level=info msg="StartContainer for \"c75487f53068293ea0737837b5896b7bd4004b566b1576055606cdd2b6899d22\" returns successfully" Nov 8 00:21:52.244325 containerd[1457]: time="2025-11-08T00:21:52.244268775Z" level=info msg="StopPodSandbox for \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\"" Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.315 [INFO][3823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.315 [INFO][3823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" iface="eth0" netns="/var/run/netns/cni-2d1efee9-efa9-0674-d69e-2e8428b77fb2" Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.316 [INFO][3823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" iface="eth0" netns="/var/run/netns/cni-2d1efee9-efa9-0674-d69e-2e8428b77fb2" Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.317 [INFO][3823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" iface="eth0" netns="/var/run/netns/cni-2d1efee9-efa9-0674-d69e-2e8428b77fb2" Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.317 [INFO][3823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.317 [INFO][3823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.400 [INFO][3839] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" HandleID="k8s-pod-network.56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Workload="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.401 [INFO][3839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.401 [INFO][3839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.407 [WARNING][3839] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" HandleID="k8s-pod-network.56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Workload="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.407 [INFO][3839] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" HandleID="k8s-pod-network.56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Workload="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.408 [INFO][3839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:52.414928 containerd[1457]: 2025-11-08 00:21:52.411 [INFO][3823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:21:52.416684 containerd[1457]: time="2025-11-08T00:21:52.415543849Z" level=info msg="TearDown network for sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\" successfully" Nov 8 00:21:52.416684 containerd[1457]: time="2025-11-08T00:21:52.415571631Z" level=info msg="StopPodSandbox for \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\" returns successfully" Nov 8 00:21:52.417737 systemd[1]: run-netns-cni\x2d2d1efee9\x2defa9\x2d0674\x2dd69e\x2d2e8428b77fb2.mount: Deactivated successfully. Nov 8 00:21:52.487004 kubelet[2518]: I1108 00:21:52.486849 2518 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/142f1b5e-d697-492e-b151-110e3673f549-whisker-ca-bundle\") pod \"142f1b5e-d697-492e-b151-110e3673f549\" (UID: \"142f1b5e-d697-492e-b151-110e3673f549\") " Nov 8 00:21:52.487004 kubelet[2518]: I1108 00:21:52.486918 2518 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/142f1b5e-d697-492e-b151-110e3673f549-whisker-backend-key-pair\") pod \"142f1b5e-d697-492e-b151-110e3673f549\" (UID: \"142f1b5e-d697-492e-b151-110e3673f549\") " Nov 8 00:21:52.487004 kubelet[2518]: I1108 00:21:52.486944 2518 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg9qv\" (UniqueName: \"kubernetes.io/projected/142f1b5e-d697-492e-b151-110e3673f549-kube-api-access-xg9qv\") pod \"142f1b5e-d697-492e-b151-110e3673f549\" (UID: \"142f1b5e-d697-492e-b151-110e3673f549\") " Nov 8 00:21:52.488630 kubelet[2518]: I1108 00:21:52.488587 2518 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/142f1b5e-d697-492e-b151-110e3673f549-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "142f1b5e-d697-492e-b151-110e3673f549" (UID: "142f1b5e-d697-492e-b151-110e3673f549"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:21:52.493941 kubelet[2518]: I1108 00:21:52.493886 2518 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/142f1b5e-d697-492e-b151-110e3673f549-kube-api-access-xg9qv" (OuterVolumeSpecName: "kube-api-access-xg9qv") pod "142f1b5e-d697-492e-b151-110e3673f549" (UID: "142f1b5e-d697-492e-b151-110e3673f549"). InnerVolumeSpecName "kube-api-access-xg9qv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:21:52.494079 kubelet[2518]: I1108 00:21:52.493886 2518 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142f1b5e-d697-492e-b151-110e3673f549-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "142f1b5e-d697-492e-b151-110e3673f549" (UID: "142f1b5e-d697-492e-b151-110e3673f549"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:21:52.495861 systemd[1]: var-lib-kubelet-pods-142f1b5e\x2dd697\x2d492e\x2db151\x2d110e3673f549-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxg9qv.mount: Deactivated successfully. 
Nov 8 00:21:52.495989 systemd[1]: var-lib-kubelet-pods-142f1b5e\x2dd697\x2d492e\x2db151\x2d110e3673f549-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:21:52.588172 kubelet[2518]: I1108 00:21:52.588083 2518 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/142f1b5e-d697-492e-b151-110e3673f549-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 8 00:21:52.588172 kubelet[2518]: I1108 00:21:52.588135 2518 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xg9qv\" (UniqueName: \"kubernetes.io/projected/142f1b5e-d697-492e-b151-110e3673f549-kube-api-access-xg9qv\") on node \"localhost\" DevicePath \"\"" Nov 8 00:21:52.588172 kubelet[2518]: I1108 00:21:52.588148 2518 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/142f1b5e-d697-492e-b151-110e3673f549-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 8 00:21:52.853354 systemd[1]: Removed slice kubepods-besteffort-pod142f1b5e_d697_492e_b151_110e3673f549.slice - libcontainer container kubepods-besteffort-pod142f1b5e_d697_492e_b151_110e3673f549.slice. Nov 8 00:21:53.200436 kubelet[2518]: E1108 00:21:53.200248 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:53.468939 kubelet[2518]: I1108 00:21:53.467590 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qgw9l" podStartSLOduration=3.188971768 podStartE2EDuration="21.467569349s" podCreationTimestamp="2025-11-08 00:21:32 +0000 UTC" firstStartedPulling="2025-11-08 00:21:33.013908441 +0000 UTC m=+18.241714314" lastFinishedPulling="2025-11-08 00:21:51.292506022 +0000 UTC m=+36.520311895" observedRunningTime="2025-11-08 00:21:53.465504633 +0000 UTC m=+38.693310526" watchObservedRunningTime="2025-11-08 00:21:53.467569349 +0000 UTC m=+38.695375242" Nov 8 00:21:54.853638 containerd[1457]: time="2025-11-08T00:21:54.846150671Z" level=info msg="StopPodSandbox for \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\"" Nov 8 00:21:54.853638 containerd[1457]: time="2025-11-08T00:21:54.851142789Z" level=info msg="StopPodSandbox for \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\"" Nov 8 00:21:54.869077 kubelet[2518]: I1108 00:21:54.859712 2518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="142f1b5e-d697-492e-b151-110e3673f549" path="/var/lib/kubelet/pods/142f1b5e-d697-492e-b151-110e3673f549/volumes" Nov 8 00:21:55.844876 containerd[1457]: time="2025-11-08T00:21:55.844797905Z" level=info msg="StopPodSandbox for \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\"" Nov 8 00:21:56.648309 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:34012.service - OpenSSH per-connection server daemon (10.0.0.1:34012). 
Nov 8 00:21:56.845639 containerd[1457]: time="2025-11-08T00:21:56.845224834Z" level=info msg="StopPodSandbox for \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\"" Nov 8 00:21:56.853310 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 34012 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:21:56.856697 systemd[1]: Created slice kubepods-besteffort-pod8afc460d_c7d5_4574_a129_acae64d116ee.slice - libcontainer container kubepods-besteffort-pod8afc460d_c7d5_4574_a129_acae64d116ee.slice. Nov 8 00:21:56.857998 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:56.868481 systemd-logind[1447]: New session 8 of user core. Nov 8 00:21:56.875024 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.634 [INFO][4001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.650 [INFO][4001] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" iface="eth0" netns="/var/run/netns/cni-e7a618d6-678e-577c-aa2e-495c0ddd9128" Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.651 [INFO][4001] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" iface="eth0" netns="/var/run/netns/cni-e7a618d6-678e-577c-aa2e-495c0ddd9128" Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.652 [INFO][4001] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" iface="eth0" netns="/var/run/netns/cni-e7a618d6-678e-577c-aa2e-495c0ddd9128" Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.652 [INFO][4001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.652 [INFO][4001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.819 [INFO][4047] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" HandleID="k8s-pod-network.193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.819 [INFO][4047] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.819 [INFO][4047] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.851 [WARNING][4047] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" HandleID="k8s-pod-network.193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.851 [INFO][4047] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" HandleID="k8s-pod-network.193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.863 [INFO][4047] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:56.879667 containerd[1457]: 2025-11-08 00:21:56.874 [INFO][4001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:21:56.883888 containerd[1457]: time="2025-11-08T00:21:56.882911050Z" level=info msg="TearDown network for sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\" successfully" Nov 8 00:21:56.883888 containerd[1457]: time="2025-11-08T00:21:56.882947538Z" level=info msg="StopPodSandbox for \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\" returns successfully" Nov 8 00:21:56.886424 containerd[1457]: time="2025-11-08T00:21:56.884230035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f57bbc6c-bq2jn,Uid:5ca7b27d-c4bf-4555-ac93-a9fa936c758c,Namespace:calico-system,Attempt:1,}" Nov 8 00:21:56.885027 systemd[1]: run-netns-cni\x2de7a618d6\x2d678e\x2d577c\x2daa2e\x2d495c0ddd9128.mount: Deactivated successfully. Nov 8 00:21:56.892869 kernel: bpftool[4098]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:21:56.934613 kubelet[2518]: I1108 00:21:56.934476 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8afc460d-c7d5-4574-a129-acae64d116ee-whisker-ca-bundle\") pod \"whisker-84655bbcf4-42fp6\" (UID: \"8afc460d-c7d5-4574-a129-acae64d116ee\") " pod="calico-system/whisker-84655bbcf4-42fp6" Nov 8 00:21:56.934613 kubelet[2518]: I1108 00:21:56.934552 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdcq\" (UniqueName: \"kubernetes.io/projected/8afc460d-c7d5-4574-a129-acae64d116ee-kube-api-access-cqdcq\") pod \"whisker-84655bbcf4-42fp6\" (UID: \"8afc460d-c7d5-4574-a129-acae64d116ee\") " pod="calico-system/whisker-84655bbcf4-42fp6" Nov 8 00:21:56.934613 kubelet[2518]: I1108 00:21:56.934579 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8afc460d-c7d5-4574-a129-acae64d116ee-whisker-backend-key-pair\") pod \"whisker-84655bbcf4-42fp6\" (UID: \"8afc460d-c7d5-4574-a129-acae64d116ee\") " pod="calico-system/whisker-84655bbcf4-42fp6" Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:56.632 [INFO][4009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:56.633 [INFO][4009] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" iface="eth0" netns="/var/run/netns/cni-ee5c66d7-dfaf-586f-ba68-edefce2e28aa" Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:56.650 [INFO][4009] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" iface="eth0" netns="/var/run/netns/cni-ee5c66d7-dfaf-586f-ba68-edefce2e28aa" Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:56.650 [INFO][4009] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" iface="eth0" netns="/var/run/netns/cni-ee5c66d7-dfaf-586f-ba68-edefce2e28aa" Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:56.650 [INFO][4009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:56.650 [INFO][4009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:56.819 [INFO][4045] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" HandleID="k8s-pod-network.904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:56.819 [INFO][4045] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:56.863 [INFO][4045] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:57.178 [WARNING][4045] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" HandleID="k8s-pod-network.904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:57.179 [INFO][4045] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" HandleID="k8s-pod-network.904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:57.182 [INFO][4045] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:57.193945 containerd[1457]: 2025-11-08 00:21:57.188 [INFO][4009] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:21:57.193945 containerd[1457]: time="2025-11-08T00:21:57.193872064Z" level=info msg="TearDown network for sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\" successfully" Nov 8 00:21:57.194959 containerd[1457]: time="2025-11-08T00:21:57.194510442Z" level=info msg="StopPodSandbox for \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\" returns successfully" Nov 8 00:21:57.195592 containerd[1457]: time="2025-11-08T00:21:57.195556775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vwhc4,Uid:2c05d744-421f-40ca-8faf-61db719dbbcd,Namespace:calico-system,Attempt:1,}" Nov 8 00:21:57.197623 systemd[1]: run-netns-cni\x2dee5c66d7\x2ddfaf\x2d586f\x2dba68\x2dedefce2e28aa.mount: Deactivated successfully. Nov 8 00:21:57.279283 systemd-networkd[1388]: vxlan.calico: Link UP Nov 8 00:21:57.279351 systemd-networkd[1388]: vxlan.calico: Gained carrier Nov 8 00:21:57.464364 containerd[1457]: time="2025-11-08T00:21:57.464235553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84655bbcf4-42fp6,Uid:8afc460d-c7d5-4574-a129-acae64d116ee,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:56.717 [INFO][4033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:56.717 [INFO][4033] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" iface="eth0" netns="/var/run/netns/cni-1b500a29-074a-52d7-ba03-5311a807a0ff" Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:56.719 [INFO][4033] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" iface="eth0" netns="/var/run/netns/cni-1b500a29-074a-52d7-ba03-5311a807a0ff" Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:56.740 [INFO][4033] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" iface="eth0" netns="/var/run/netns/cni-1b500a29-074a-52d7-ba03-5311a807a0ff" Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:56.740 [INFO][4033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:56.740 [INFO][4033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:56.830 [INFO][4061] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" HandleID="k8s-pod-network.169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:56.830 [INFO][4061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:57.183 [INFO][4061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:57.263 [WARNING][4061] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" HandleID="k8s-pod-network.169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:57.263 [INFO][4061] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" HandleID="k8s-pod-network.169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:57.605 [INFO][4061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:57.611716 containerd[1457]: 2025-11-08 00:21:57.608 [INFO][4033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:21:57.612753 containerd[1457]: time="2025-11-08T00:21:57.612431507Z" level=info msg="TearDown network for sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\" successfully" Nov 8 00:21:57.612753 containerd[1457]: time="2025-11-08T00:21:57.612484867Z" level=info msg="StopPodSandbox for \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\" returns successfully" Nov 8 00:21:57.613221 kubelet[2518]: E1108 00:21:57.613196 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:57.613915 containerd[1457]: time="2025-11-08T00:21:57.613840491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-htskj,Uid:da9743a8-c863-487c-b161-786bc9c10f6c,Namespace:kube-system,Attempt:1,}" Nov 8 00:21:57.615939 systemd[1]: run-netns-cni\x2d1b500a29\x2d074a\x2d52d7\x2dba03\x2d5311a807a0ff.mount: Deactivated successfully. Nov 8 00:21:57.627997 sshd[4043]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.265 [INFO][4084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.265 [INFO][4084] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" iface="eth0" netns="/var/run/netns/cni-8690f0ea-235d-f531-5953-c300ddae8901" Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.266 [INFO][4084] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" iface="eth0" netns="/var/run/netns/cni-8690f0ea-235d-f531-5953-c300ddae8901" Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.268 [INFO][4084] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" iface="eth0" netns="/var/run/netns/cni-8690f0ea-235d-f531-5953-c300ddae8901" Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.268 [INFO][4084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.268 [INFO][4084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.294 [INFO][4130] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" HandleID="k8s-pod-network.a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.294 [INFO][4130] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.605 [INFO][4130] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.612 [WARNING][4130] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" HandleID="k8s-pod-network.a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.612 [INFO][4130] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" HandleID="k8s-pod-network.a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.617 [INFO][4130] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:57.629984 containerd[1457]: 2025-11-08 00:21:57.625 [INFO][4084] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:21:57.630370 containerd[1457]: time="2025-11-08T00:21:57.630255371Z" level=info msg="TearDown network for sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\" successfully" Nov 8 00:21:57.630370 containerd[1457]: time="2025-11-08T00:21:57.630288744Z" level=info msg="StopPodSandbox for \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\" returns successfully" Nov 8 00:21:57.631685 containerd[1457]: time="2025-11-08T00:21:57.631000429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-67lwd,Uid:6fb889d5-2903-4e6b-a458-6fb9eecb4dcd,Namespace:calico-system,Attempt:1,}" Nov 8 00:21:57.631970 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:21:57.632401 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:34012.service: Deactivated successfully. Nov 8 00:21:57.634351 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:21:57.635335 systemd-logind[1447]: Removed session 8. 
Nov 8 00:21:57.844611 containerd[1457]: time="2025-11-08T00:21:57.844170716Z" level=info msg="StopPodSandbox for \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\"" Nov 8 00:21:57.891167 systemd[1]: run-netns-cni\x2d8690f0ea\x2d235d\x2df531\x2d5953\x2dc300ddae8901.mount: Deactivated successfully. Nov 8 00:21:57.993838 systemd-networkd[1388]: cali95e44acbb14: Link UP Nov 8 00:21:57.996398 systemd-networkd[1388]: cali95e44acbb14: Gained carrier Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.866 [INFO][4193] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0 calico-kube-controllers-86f57bbc6c- calico-system 5ca7b27d-c4bf-4555-ac93-a9fa936c758c 951 0 2025-11-08 00:21:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86f57bbc6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86f57bbc6c-bq2jn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali95e44acbb14 [] [] }} ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Namespace="calico-system" Pod="calico-kube-controllers-86f57bbc6c-bq2jn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.866 [INFO][4193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Namespace="calico-system" Pod="calico-kube-controllers-86f57bbc6c-bq2jn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.939 [INFO][4276] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" HandleID="k8s-pod-network.dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.940 [INFO][4276] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" HandleID="k8s-pod-network.dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eec0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86f57bbc6c-bq2jn", "timestamp":"2025-11-08 00:21:57.939731309 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.940 [INFO][4276] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.940 [INFO][4276] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.940 [INFO][4276] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.947 [INFO][4276] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" host="localhost" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.953 [INFO][4276] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.957 [INFO][4276] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.959 [INFO][4276] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.962 [INFO][4276] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.962 [INFO][4276] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" host="localhost" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.966 [INFO][4276] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62 Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.974 [INFO][4276] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" host="localhost" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.979 [INFO][4276] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" host="localhost" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.979 [INFO][4276] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" host="localhost" Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.979 [INFO][4276] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:58.016717 containerd[1457]: 2025-11-08 00:21:57.979 [INFO][4276] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" HandleID="k8s-pod-network.dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:21:58.017673 containerd[1457]: 2025-11-08 00:21:57.987 [INFO][4193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Namespace="calico-system" Pod="calico-kube-controllers-86f57bbc6c-bq2jn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0", GenerateName:"calico-kube-controllers-86f57bbc6c-", Namespace:"calico-system", SelfLink:"", UID:"5ca7b27d-c4bf-4555-ac93-a9fa936c758c", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f57bbc6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86f57bbc6c-bq2jn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95e44acbb14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.017673 containerd[1457]: 2025-11-08 00:21:57.988 [INFO][4193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Namespace="calico-system" Pod="calico-kube-controllers-86f57bbc6c-bq2jn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:21:58.017673 containerd[1457]: 2025-11-08 00:21:57.988 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95e44acbb14 ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Namespace="calico-system" Pod="calico-kube-controllers-86f57bbc6c-bq2jn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:21:58.017673 containerd[1457]: 2025-11-08 00:21:57.996 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Namespace="calico-system" Pod="calico-kube-controllers-86f57bbc6c-bq2jn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:21:58.017673 containerd[1457]: 2025-11-08 00:21:57.998 [INFO][4193] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Namespace="calico-system" Pod="calico-kube-controllers-86f57bbc6c-bq2jn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0", GenerateName:"calico-kube-controllers-86f57bbc6c-", Namespace:"calico-system", SelfLink:"", UID:"5ca7b27d-c4bf-4555-ac93-a9fa936c758c", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f57bbc6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62", Pod:"calico-kube-controllers-86f57bbc6c-bq2jn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95e44acbb14", MAC:"5a:76:eb:75:8e:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.017673 containerd[1457]: 2025-11-08 00:21:58.013 [INFO][4193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62" Namespace="calico-system" Pod="calico-kube-controllers-86f57bbc6c-bq2jn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:21:58.046300 containerd[1457]: time="2025-11-08T00:21:58.046012192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:58.046300 containerd[1457]: time="2025-11-08T00:21:58.046081863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:58.046300 containerd[1457]: time="2025-11-08T00:21:58.046095548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.046300 containerd[1457]: time="2025-11-08T00:21:58.046189304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.071988 systemd[1]: Started cri-containerd-dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62.scope - libcontainer container dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62. 
Nov 8 00:21:58.090004 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:58.101111 systemd-networkd[1388]: cali589d23ba101: Link UP Nov 8 00:21:58.105883 systemd-networkd[1388]: cali589d23ba101: Gained carrier Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:57.928 [INFO][4216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:57.930 [INFO][4216] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" iface="eth0" netns="/var/run/netns/cni-87e8ce54-b74d-e790-5d0e-d36399bb2c71" Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:57.930 [INFO][4216] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" iface="eth0" netns="/var/run/netns/cni-87e8ce54-b74d-e790-5d0e-d36399bb2c71" Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:57.931 [INFO][4216] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" iface="eth0" netns="/var/run/netns/cni-87e8ce54-b74d-e790-5d0e-d36399bb2c71" Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:57.931 [INFO][4216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:57.931 [INFO][4216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:57.986 [INFO][4297] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" HandleID="k8s-pod-network.4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:57.986 [INFO][4297] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:58.084 [INFO][4297] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:58.094 [WARNING][4297] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" HandleID="k8s-pod-network.4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:58.094 [INFO][4297] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" HandleID="k8s-pod-network.4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:58.100 [INFO][4297] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:58.119614 containerd[1457]: 2025-11-08 00:21:58.113 [INFO][4216] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:21:58.120138 containerd[1457]: time="2025-11-08T00:21:58.119829702Z" level=info msg="TearDown network for sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\" successfully" Nov 8 00:21:58.120138 containerd[1457]: time="2025-11-08T00:21:58.119855650Z" level=info msg="StopPodSandbox for \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\" returns successfully" Nov 8 00:21:58.120782 containerd[1457]: time="2025-11-08T00:21:58.120441059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b76c59587-szgkh,Uid:ad60f746-d64e-4394-bbcc-99e4406b9d56,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:21:58.124302 systemd[1]: run-netns-cni\x2d87e8ce54\x2db74d\x2de790\x2d5d0e\x2dd36399bb2c71.mount: Deactivated successfully. Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:57.930 [INFO][4223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--vwhc4-eth0 goldmane-666569f655- calico-system 2c05d744-421f-40ca-8faf-61db719dbbcd 952 0 2025-11-08 00:21:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-vwhc4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali589d23ba101 [] [] }} ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Namespace="calico-system" Pod="goldmane-666569f655-vwhc4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwhc4-" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:57.932 [INFO][4223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Namespace="calico-system" Pod="goldmane-666569f655-vwhc4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:57.981 [INFO][4304] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" HandleID="k8s-pod-network.9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:57.981 [INFO][4304] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" HandleID="k8s-pod-network.9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000346fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-vwhc4", "timestamp":"2025-11-08 00:21:57.981332957 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:57.981 [INFO][4304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:57.981 [INFO][4304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:57.981 [INFO][4304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.049 [INFO][4304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" host="localhost" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.059 [INFO][4304] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.065 [INFO][4304] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.067 [INFO][4304] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.069 [INFO][4304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.069 [INFO][4304] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" host="localhost" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.071 [INFO][4304] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8 Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.075 [INFO][4304] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" host="localhost" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.084 [INFO][4304] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" host="localhost" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.084 [INFO][4304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" host="localhost" Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.084 [INFO][4304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:58.129498 containerd[1457]: 2025-11-08 00:21:58.084 [INFO][4304] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" HandleID="k8s-pod-network.9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:21:58.130212 containerd[1457]: 2025-11-08 00:21:58.088 [INFO][4223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Namespace="calico-system" Pod="goldmane-666569f655-vwhc4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwhc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vwhc4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2c05d744-421f-40ca-8faf-61db719dbbcd", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-vwhc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali589d23ba101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.130212 containerd[1457]: 2025-11-08 00:21:58.089 [INFO][4223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Namespace="calico-system" Pod="goldmane-666569f655-vwhc4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:21:58.130212 containerd[1457]: 2025-11-08 00:21:58.089 [INFO][4223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali589d23ba101 ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Namespace="calico-system" Pod="goldmane-666569f655-vwhc4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:21:58.130212 containerd[1457]: 2025-11-08 00:21:58.109 [INFO][4223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Namespace="calico-system" Pod="goldmane-666569f655-vwhc4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:21:58.130212 containerd[1457]: 2025-11-08 00:21:58.111 [INFO][4223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Namespace="calico-system" Pod="goldmane-666569f655-vwhc4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwhc4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vwhc4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2c05d744-421f-40ca-8faf-61db719dbbcd", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8", Pod:"goldmane-666569f655-vwhc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali589d23ba101", MAC:"06:f8:7b:18:7a:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.130212 containerd[1457]: 2025-11-08 00:21:58.120 [INFO][4223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8" Namespace="calico-system" Pod="goldmane-666569f655-vwhc4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:21:58.132663 containerd[1457]: time="2025-11-08T00:21:58.132618967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f57bbc6c-bq2jn,Uid:5ca7b27d-c4bf-4555-ac93-a9fa936c758c,Namespace:calico-system,Attempt:1,} returns sandbox id \"dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62\"" Nov 8 00:21:58.135393 containerd[1457]: time="2025-11-08T00:21:58.135329252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:21:58.169324 containerd[1457]: time="2025-11-08T00:21:58.169098011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:58.169447 containerd[1457]: time="2025-11-08T00:21:58.169350624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:58.169447 containerd[1457]: time="2025-11-08T00:21:58.169406760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.169892 containerd[1457]: time="2025-11-08T00:21:58.169856193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.198611 systemd[1]: Started cri-containerd-9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8.scope - libcontainer container 9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8. 
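Two details in the endpoint just written are worth noting: the host-side interface is named cali589d23ba101, i.e. a "cali" prefix plus 11 hex characters, which keeps the name within the kernel's 15-character interface-name limit (IFNAMSIZ), and the name is derived deterministically from the workload endpoint identity so repeated CNI ADDs converge on the same device. The exact hash input Calico uses is release-dependent; the following only illustrates the shape of the scheme:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName illustrates a "cali" + 11-hex-chars naming scheme. The real
    // Calico implementation hashes the workload endpoint ID, but the exact
    // input and hash are version-dependent; do not expect this sketch to
    // reproduce cali589d23ba101.
    func vethName(workloadEndpointID string) string {
        sum := sha1.Sum([]byte(workloadEndpointID))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethName("localhost-k8s-goldmane--666569f655--vwhc4-eth0"))
    }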
Nov 8 00:21:58.205745 systemd-networkd[1388]: cali21988fea43c: Link UP Nov 8 00:21:58.207518 systemd-networkd[1388]: cali21988fea43c: Gained carrier Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:57.919 [INFO][4231] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84655bbcf4--42fp6-eth0 whisker-84655bbcf4- calico-system 8afc460d-c7d5-4574-a129-acae64d116ee 975 0 2025-11-08 00:21:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84655bbcf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84655bbcf4-42fp6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali21988fea43c [] [] }} ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Namespace="calico-system" Pod="whisker-84655bbcf4-42fp6" WorkloadEndpoint="localhost-k8s-whisker--84655bbcf4--42fp6-" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:57.919 [INFO][4231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Namespace="calico-system" Pod="whisker-84655bbcf4-42fp6" WorkloadEndpoint="localhost-k8s-whisker--84655bbcf4--42fp6-eth0" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:57.992 [INFO][4293] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" HandleID="k8s-pod-network.7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Workload="localhost-k8s-whisker--84655bbcf4--42fp6-eth0" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:57.994 [INFO][4293] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" HandleID="k8s-pod-network.7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Workload="localhost-k8s-whisker--84655bbcf4--42fp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000540e90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84655bbcf4-42fp6", "timestamp":"2025-11-08 00:21:57.992507644 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:57.994 [INFO][4293] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.101 [INFO][4293] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.101 [INFO][4293] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.149 [INFO][4293] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" host="localhost" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.159 [INFO][4293] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.166 [INFO][4293] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.168 [INFO][4293] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.170 [INFO][4293] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.170 [INFO][4293] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" host="localhost" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.171 [INFO][4293] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8 Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.175 [INFO][4293] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" host="localhost" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.185 [INFO][4293] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" host="localhost" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.186 [INFO][4293] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" host="localhost" Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.186 [INFO][4293] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
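Every assignment in this log lands in the same affine block, 192.168.88.128/26: goldmane got .130, whisker .131 here, and the later pods take .132 through .134 in order. A /26 carries 2^(32-26) = 64 addresses, which is Calico's default IPAM block size, so a single block comfortably covers this node's pods:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, block, err := net.ParseCIDR("192.168.88.128/26")
        if err != nil {
            panic(err)
        }
        ones, bits := block.Mask.Size()
        fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 64
    }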
Nov 8 00:21:58.225019 containerd[1457]: 2025-11-08 00:21:58.186 [INFO][4293] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" HandleID="k8s-pod-network.7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Workload="localhost-k8s-whisker--84655bbcf4--42fp6-eth0" Nov 8 00:21:58.225759 containerd[1457]: 2025-11-08 00:21:58.199 [INFO][4231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Namespace="calico-system" Pod="whisker-84655bbcf4-42fp6" WorkloadEndpoint="localhost-k8s-whisker--84655bbcf4--42fp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84655bbcf4--42fp6-eth0", GenerateName:"whisker-84655bbcf4-", Namespace:"calico-system", SelfLink:"", UID:"8afc460d-c7d5-4574-a129-acae64d116ee", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84655bbcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84655bbcf4-42fp6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali21988fea43c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.225759 containerd[1457]: 2025-11-08 00:21:58.200 [INFO][4231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Namespace="calico-system" Pod="whisker-84655bbcf4-42fp6" WorkloadEndpoint="localhost-k8s-whisker--84655bbcf4--42fp6-eth0" Nov 8 00:21:58.225759 containerd[1457]: 2025-11-08 00:21:58.200 [INFO][4231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21988fea43c ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Namespace="calico-system" Pod="whisker-84655bbcf4-42fp6" WorkloadEndpoint="localhost-k8s-whisker--84655bbcf4--42fp6-eth0" Nov 8 00:21:58.225759 containerd[1457]: 2025-11-08 00:21:58.208 [INFO][4231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Namespace="calico-system" Pod="whisker-84655bbcf4-42fp6" WorkloadEndpoint="localhost-k8s-whisker--84655bbcf4--42fp6-eth0" Nov 8 00:21:58.225759 containerd[1457]: 2025-11-08 00:21:58.209 [INFO][4231] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Namespace="calico-system" Pod="whisker-84655bbcf4-42fp6" WorkloadEndpoint="localhost-k8s-whisker--84655bbcf4--42fp6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84655bbcf4--42fp6-eth0", GenerateName:"whisker-84655bbcf4-", Namespace:"calico-system", SelfLink:"", UID:"8afc460d-c7d5-4574-a129-acae64d116ee", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84655bbcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8", Pod:"whisker-84655bbcf4-42fp6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali21988fea43c", MAC:"92:51:b8:fe:d0:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.225759 containerd[1457]: 2025-11-08 00:21:58.220 [INFO][4231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8" Namespace="calico-system" Pod="whisker-84655bbcf4-42fp6" WorkloadEndpoint="localhost-k8s-whisker--84655bbcf4--42fp6-eth0" Nov 8 00:21:58.233369 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:58.244602 containerd[1457]: time="2025-11-08T00:21:58.244447034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:58.244602 containerd[1457]: time="2025-11-08T00:21:58.244519069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:58.244791 containerd[1457]: time="2025-11-08T00:21:58.244536822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.245454 containerd[1457]: time="2025-11-08T00:21:58.245419799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.265959 systemd[1]: Started cri-containerd-7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8.scope - libcontainer container 7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8. 
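The MAC addresses recorded for the container interfaces (06:f8:7b:18:7a:76 for goldmane above, 92:51:b8:fe:d0:99 for whisker here) are generated rather than vendor-assigned: each has the locally-administered bit (0x02) set and the multicast bit (0x01) clear in the first octet. A quick check:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        for _, s := range []string{"06:f8:7b:18:7a:76", "92:51:b8:fe:d0:99"} {
            mac, err := net.ParseMAC(s)
            if err != nil {
                panic(err)
            }
            // First-octet bit 0x02: locally administered; bit 0x01: multicast.
            fmt.Printf("%s local=%v multicast=%v\n",
                mac, mac[0]&0x02 != 0, mac[0]&0x01 != 0)
        }
    }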
Nov 8 00:21:58.270962 containerd[1457]: time="2025-11-08T00:21:58.270848349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vwhc4,Uid:2c05d744-421f-40ca-8faf-61db719dbbcd,Namespace:calico-system,Attempt:1,} returns sandbox id \"9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8\"" Nov 8 00:21:58.291316 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:58.309423 systemd-networkd[1388]: cali70c7f638d2b: Link UP Nov 8 00:21:58.310606 systemd-networkd[1388]: cali70c7f638d2b: Gained carrier Nov 8 00:21:58.324788 containerd[1457]: time="2025-11-08T00:21:58.324746675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84655bbcf4-42fp6,Uid:8afc460d-c7d5-4574-a129-acae64d116ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f7b8d99fcaf6109812eb6f6b41b61703c34b9ad7b29b9a8f749203a980506a8\"" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:57.946 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--67lwd-eth0 csi-node-driver- calico-system 6fb889d5-2903-4e6b-a458-6fb9eecb4dcd 978 0 2025-11-08 00:21:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-67lwd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali70c7f638d2b [] [] }} ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Namespace="calico-system" Pod="csi-node-driver-67lwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--67lwd-" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:57.946 [INFO][4258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Namespace="calico-system" Pod="csi-node-driver-67lwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.001 [INFO][4315] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" HandleID="k8s-pod-network.888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.001 [INFO][4315] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" HandleID="k8s-pod-network.888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-67lwd", "timestamp":"2025-11-08 00:21:58.001292968 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.001 [INFO][4315] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.186 [INFO][4315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.191 [INFO][4315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.250 [INFO][4315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" host="localhost" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.259 [INFO][4315] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.268 [INFO][4315] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.271 [INFO][4315] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.275 [INFO][4315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.275 [INFO][4315] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" host="localhost" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.278 [INFO][4315] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828 Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.288 [INFO][4315] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" host="localhost" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.300 [INFO][4315] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" host="localhost" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.300 [INFO][4315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" host="localhost" Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.300 [INFO][4315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
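The timestamps make the IPAM serialization visible: this csi-node-driver request logged "About to acquire host-wide IPAM lock" at 58.001 but only acquired it at 58.186, after the goldmane and whisker assignments had released it. Concurrent CNI ADDs on one node queue on that lock; conceptually it behaves like the in-process sketch below (a minimal illustration only — Calico's actual lock is held per host, not per process):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    var hostWideIPAMLock sync.Mutex // stand-in for Calico's host-wide lock

    func assign(pod string, wg *sync.WaitGroup) {
        defer wg.Done()
        fmt.Println(pod, "about to acquire host-wide IPAM lock")
        hostWideIPAMLock.Lock()
        fmt.Println(pod, "acquired lock")
        time.Sleep(50 * time.Millisecond) // stand-in for block lookup and claim
        hostWideIPAMLock.Unlock()
        fmt.Println(pod, "released lock")
    }

    func main() {
        var wg sync.WaitGroup
        for _, pod := range []string{"goldmane", "whisker", "csi-node-driver"} {
            wg.Add(1)
            go assign(pod, &wg)
        }
        wg.Wait()
    }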
Nov 8 00:21:58.332611 containerd[1457]: 2025-11-08 00:21:58.300 [INFO][4315] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" HandleID="k8s-pod-network.888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:21:58.333147 containerd[1457]: 2025-11-08 00:21:58.305 [INFO][4258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Namespace="calico-system" Pod="csi-node-driver-67lwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--67lwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--67lwd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-67lwd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70c7f638d2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.333147 containerd[1457]: 2025-11-08 00:21:58.305 [INFO][4258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Namespace="calico-system" Pod="csi-node-driver-67lwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:21:58.333147 containerd[1457]: 2025-11-08 00:21:58.305 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70c7f638d2b ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Namespace="calico-system" Pod="csi-node-driver-67lwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:21:58.333147 containerd[1457]: 2025-11-08 00:21:58.310 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Namespace="calico-system" Pod="csi-node-driver-67lwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:21:58.333147 containerd[1457]: 2025-11-08 00:21:58.311 [INFO][4258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Namespace="calico-system" Pod="csi-node-driver-67lwd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--67lwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--67lwd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828", Pod:"csi-node-driver-67lwd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70c7f638d2b", MAC:"3a:c3:2e:ec:fc:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.333147 containerd[1457]: 2025-11-08 00:21:58.328 [INFO][4258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828" Namespace="calico-system" Pod="csi-node-driver-67lwd" WorkloadEndpoint="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:21:58.360481 containerd[1457]: time="2025-11-08T00:21:58.359460917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:58.360481 containerd[1457]: time="2025-11-08T00:21:58.360119002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:58.360481 containerd[1457]: time="2025-11-08T00:21:58.360134552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.360481 containerd[1457]: time="2025-11-08T00:21:58.360229840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.384977 systemd[1]: Started cri-containerd-888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828.scope - libcontainer container 888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828. 
Nov 8 00:21:58.396097 systemd-networkd[1388]: cali97138940eef: Link UP Nov 8 00:21:58.396984 systemd-networkd[1388]: cali97138940eef: Gained carrier Nov 8 00:21:58.402541 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:57.954 [INFO][4249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--htskj-eth0 coredns-674b8bbfcf- kube-system da9743a8-c863-487c-b161-786bc9c10f6c 965 0 2025-11-08 00:21:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-htskj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali97138940eef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Namespace="kube-system" Pod="coredns-674b8bbfcf-htskj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htskj-" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:57.955 [INFO][4249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Namespace="kube-system" Pod="coredns-674b8bbfcf-htskj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.015 [INFO][4320] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" HandleID="k8s-pod-network.a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.015 [INFO][4320] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" HandleID="k8s-pod-network.a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000372550), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-htskj", "timestamp":"2025-11-08 00:21:58.015395628 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.015 [INFO][4320] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.300 [INFO][4320] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.301 [INFO][4320] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.349 [INFO][4320] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" host="localhost" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.358 [INFO][4320] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.367 [INFO][4320] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.369 [INFO][4320] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.372 [INFO][4320] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.372 [INFO][4320] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" host="localhost" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.373 [INFO][4320] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59 Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.379 [INFO][4320] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" host="localhost" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.384 [INFO][4320] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" host="localhost" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.384 [INFO][4320] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" host="localhost" Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.384 [INFO][4320] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
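In the coredns endpoint dump that follows, the container ports are printed as Go hex literals: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the metrics port), i.e. 0x23c1 = 2*4096 + 3*256 + 12*16 + 1 = 9153. Trivially checkable:

    package main

    import "fmt"

    func main() {
        // Port values as printed by Go's %#v in the endpoint dump below.
        fmt.Println(0x35, 0x23c1) // 53 9153
    }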
Nov 8 00:21:58.413454 containerd[1457]: 2025-11-08 00:21:58.384 [INFO][4320] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" HandleID="k8s-pod-network.a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:21:58.414053 containerd[1457]: 2025-11-08 00:21:58.388 [INFO][4249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Namespace="kube-system" Pod="coredns-674b8bbfcf-htskj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--htskj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"da9743a8-c863-487c-b161-786bc9c10f6c", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-htskj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97138940eef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.414053 containerd[1457]: 2025-11-08 00:21:58.388 [INFO][4249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Namespace="kube-system" Pod="coredns-674b8bbfcf-htskj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:21:58.414053 containerd[1457]: 2025-11-08 00:21:58.388 [INFO][4249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97138940eef ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Namespace="kube-system" Pod="coredns-674b8bbfcf-htskj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:21:58.414053 containerd[1457]: 2025-11-08 00:21:58.398 [INFO][4249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Namespace="kube-system" Pod="coredns-674b8bbfcf-htskj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:21:58.414053 
containerd[1457]: 2025-11-08 00:21:58.398 [INFO][4249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Namespace="kube-system" Pod="coredns-674b8bbfcf-htskj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--htskj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"da9743a8-c863-487c-b161-786bc9c10f6c", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59", Pod:"coredns-674b8bbfcf-htskj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97138940eef", MAC:"72:b5:da:66:99:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.414053 containerd[1457]: 2025-11-08 00:21:58.410 [INFO][4249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59" Namespace="kube-system" Pod="coredns-674b8bbfcf-htskj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:21:58.420148 containerd[1457]: time="2025-11-08T00:21:58.420111974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-67lwd,Uid:6fb889d5-2903-4e6b-a458-6fb9eecb4dcd,Namespace:calico-system,Attempt:1,} returns sandbox id \"888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828\"" Nov 8 00:21:58.435689 containerd[1457]: time="2025-11-08T00:21:58.435553135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:58.435689 containerd[1457]: time="2025-11-08T00:21:58.435649245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:58.435689 containerd[1457]: time="2025-11-08T00:21:58.435670565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.435875 containerd[1457]: time="2025-11-08T00:21:58.435777416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.454951 systemd[1]: Started cri-containerd-a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59.scope - libcontainer container a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59. Nov 8 00:21:58.469230 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:58.493563 containerd[1457]: time="2025-11-08T00:21:58.493514424Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:58.493870 systemd-networkd[1388]: cali1ffef67e1f1: Link UP Nov 8 00:21:58.495450 systemd-networkd[1388]: cali1ffef67e1f1: Gained carrier Nov 8 00:21:58.503844 containerd[1457]: time="2025-11-08T00:21:58.494956791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:21:58.503844 containerd[1457]: time="2025-11-08T00:21:58.496526155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:58.503844 containerd[1457]: time="2025-11-08T00:21:58.498898667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-htskj,Uid:da9743a8-c863-487c-b161-786bc9c10f6c,Namespace:kube-system,Attempt:1,} returns sandbox id \"a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59\"" Nov 8 00:21:58.504058 kubelet[2518]: E1108 00:21:58.503597 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:58.504058 kubelet[2518]: E1108 00:21:58.503635 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:58.504466 kubelet[2518]: E1108 00:21:58.503849 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcjcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86f57bbc6c-bq2jn_calico-system(5ca7b27d-c4bf-4555-ac93-a9fa936c758c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:58.505058 containerd[1457]: time="2025-11-08T00:21:58.504989915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:21:58.505224 kubelet[2518]: E1108 00:21:58.505212 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:58.505435 kubelet[2518]: E1108 00:21:58.505371 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" podUID="5ca7b27d-c4bf-4555-ac93-a9fa936c758c" Nov 8 00:21:58.514174 containerd[1457]: time="2025-11-08T00:21:58.514127019Z" level=info msg="CreateContainer within sandbox \"a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.187 [INFO][4406] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0 calico-apiserver-5b76c59587- calico-apiserver ad60f746-d64e-4394-bbcc-99e4406b9d56 982 0 2025-11-08 00:21:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b76c59587 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b76c59587-szgkh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1ffef67e1f1 [] [] }} ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-szgkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--szgkh-" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.187 [INFO][4406] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-szgkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.231 [INFO][4447] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" HandleID="k8s-pod-network.2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.231 [INFO][4447] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" HandleID="k8s-pod-network.2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003258c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b76c59587-szgkh", "timestamp":"2025-11-08 00:21:58.231545397 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.231 [INFO][4447] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.384 [INFO][4447] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.384 [INFO][4447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.450 [INFO][4447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" host="localhost" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.460 [INFO][4447] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.468 [INFO][4447] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.469 [INFO][4447] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.471 [INFO][4447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.471 [INFO][4447] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" host="localhost" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.474 [INFO][4447] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59 Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.478 [INFO][4447] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" host="localhost" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.485 [INFO][4447] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" host="localhost" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.485 [INFO][4447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" host="localhost" Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.485 [INFO][4447] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
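The dns.go:153 warning above is kubelet's resolv.conf sanitization: resolvers built on glibc honor at most three nameserver entries (MAXNS), so kubelet warns and applies only the first three, here 1.1.1.1 1.0.0.1 8.8.8.8. Illustrative trimming only (not kubelet's code; the log does not say which entries were omitted, so the fourth address below is a placeholder):

    package main

    import "fmt"

    const maxNS = 3 // glibc MAXNS: at most three nameservers are used

    func main() {
        // 9.9.9.9 is a hypothetical fourth entry standing in for whatever
        // the host resolv.conf actually contained beyond the limit.
        nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        if len(nameservers) > maxNS {
            fmt.Printf("Nameserver limits exceeded, omitting %d entries\n",
                len(nameservers)-maxNS)
            nameservers = nameservers[:maxNS]
        }
        fmt.Println("applied nameserver line:", nameservers)
    }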
Nov 8 00:21:58.515836 containerd[1457]: 2025-11-08 00:21:58.485 [INFO][4447] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" HandleID="k8s-pod-network.2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:21:58.516319 containerd[1457]: 2025-11-08 00:21:58.490 [INFO][4406] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-szgkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0", GenerateName:"calico-apiserver-5b76c59587-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad60f746-d64e-4394-bbcc-99e4406b9d56", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b76c59587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b76c59587-szgkh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ffef67e1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.516319 containerd[1457]: 2025-11-08 00:21:58.490 [INFO][4406] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-szgkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:21:58.516319 containerd[1457]: 2025-11-08 00:21:58.490 [INFO][4406] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ffef67e1f1 ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-szgkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:21:58.516319 containerd[1457]: 2025-11-08 00:21:58.494 [INFO][4406] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-szgkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:21:58.516319 containerd[1457]: 2025-11-08 00:21:58.495 [INFO][4406] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-szgkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0", GenerateName:"calico-apiserver-5b76c59587-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad60f746-d64e-4394-bbcc-99e4406b9d56", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b76c59587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59", Pod:"calico-apiserver-5b76c59587-szgkh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ffef67e1f1", MAC:"aa:c8:ee:a4:6b:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:58.516319 containerd[1457]: 2025-11-08 00:21:58.510 [INFO][4406] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-szgkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:21:58.533532 containerd[1457]: time="2025-11-08T00:21:58.533481102Z" level=info msg="CreateContainer within sandbox \"a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fec2b51ae68ab9f7caf38f467fd964e4e81499036fc70445d7f60cc24e79356\"" Nov 8 00:21:58.535219 containerd[1457]: time="2025-11-08T00:21:58.535189858Z" level=info msg="StartContainer for \"0fec2b51ae68ab9f7caf38f467fd964e4e81499036fc70445d7f60cc24e79356\"" Nov 8 00:21:58.536479 containerd[1457]: time="2025-11-08T00:21:58.536377597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:58.536479 containerd[1457]: time="2025-11-08T00:21:58.536466133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:58.536543 containerd[1457]: time="2025-11-08T00:21:58.536488966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.536635 containerd[1457]: time="2025-11-08T00:21:58.536602278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:58.566036 systemd[1]: Started cri-containerd-2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59.scope - libcontainer container 2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59. Nov 8 00:21:58.570621 systemd[1]: Started cri-containerd-0fec2b51ae68ab9f7caf38f467fd964e4e81499036fc70445d7f60cc24e79356.scope - libcontainer container 0fec2b51ae68ab9f7caf38f467fd964e4e81499036fc70445d7f60cc24e79356. Nov 8 00:21:58.582786 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:58.616971 containerd[1457]: time="2025-11-08T00:21:58.616862275Z" level=info msg="StartContainer for \"0fec2b51ae68ab9f7caf38f467fd964e4e81499036fc70445d7f60cc24e79356\" returns successfully" Nov 8 00:21:58.617836 containerd[1457]: time="2025-11-08T00:21:58.617794184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b76c59587-szgkh,Uid:ad60f746-d64e-4394-bbcc-99e4406b9d56,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59\"" Nov 8 00:21:58.621527 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Nov 8 00:21:58.845014 containerd[1457]: time="2025-11-08T00:21:58.844967135Z" level=info msg="StopPodSandbox for \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\"" Nov 8 00:21:58.845429 containerd[1457]: time="2025-11-08T00:21:58.845070018Z" level=info msg="StopPodSandbox for \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\"" Nov 8 00:21:58.853162 containerd[1457]: time="2025-11-08T00:21:58.852921600Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:58.872614 containerd[1457]: time="2025-11-08T00:21:58.872462764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:21:58.873260 containerd[1457]: time="2025-11-08T00:21:58.872857805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:58.873323 kubelet[2518]: E1108 00:21:58.873051 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:21:58.873323 kubelet[2518]: E1108 00:21:58.873112 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:21:58.873615 kubelet[2518]: E1108 00:21:58.873531 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtxfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vwhc4_calico-system(2c05d744-421f-40ca-8faf-61db719dbbcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:58.873927 containerd[1457]: time="2025-11-08T00:21:58.873649030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:21:58.875437 kubelet[2518]: E1108 00:21:58.875202 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-vwhc4" podUID="2c05d744-421f-40ca-8faf-61db719dbbcd" Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.908 [INFO][4734] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.909 [INFO][4734] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" iface="eth0" netns="/var/run/netns/cni-6c60853b-fe6b-139d-99a6-4c8da850bcab" Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.909 [INFO][4734] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" iface="eth0" netns="/var/run/netns/cni-6c60853b-fe6b-139d-99a6-4c8da850bcab" Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.910 [INFO][4734] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" iface="eth0" netns="/var/run/netns/cni-6c60853b-fe6b-139d-99a6-4c8da850bcab" Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.910 [INFO][4734] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.910 [INFO][4734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.935 [INFO][4750] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" HandleID="k8s-pod-network.7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.935 [INFO][4750] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.935 [INFO][4750] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.946 [WARNING][4750] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" HandleID="k8s-pod-network.7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.946 [INFO][4750] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" HandleID="k8s-pod-network.7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.949 [INFO][4750] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:58.959160 containerd[1457]: 2025-11-08 00:21:58.954 [INFO][4734] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:21:58.960264 containerd[1457]: time="2025-11-08T00:21:58.960220819Z" level=info msg="TearDown network for sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\" successfully" Nov 8 00:21:58.960383 containerd[1457]: time="2025-11-08T00:21:58.960355521Z" level=info msg="StopPodSandbox for \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\" returns successfully" Nov 8 00:21:58.961363 kubelet[2518]: E1108 00:21:58.961311 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:58.962256 systemd[1]: run-netns-cni\x2d6c60853b\x2dfe6b\x2d139d\x2d99a6\x2d4c8da850bcab.mount: Deactivated successfully. Nov 8 00:21:58.965430 containerd[1457]: time="2025-11-08T00:21:58.965094733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c5rwc,Uid:4c9a132b-c373-4aff-a37f-8a647d110275,Namespace:kube-system,Attempt:1,}" Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.937 [INFO][4735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.937 [INFO][4735] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" iface="eth0" netns="/var/run/netns/cni-82378aca-04aa-badc-73f9-227b07681973" Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.937 [INFO][4735] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" iface="eth0" netns="/var/run/netns/cni-82378aca-04aa-badc-73f9-227b07681973" Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.938 [INFO][4735] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" iface="eth0" netns="/var/run/netns/cni-82378aca-04aa-badc-73f9-227b07681973" Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.938 [INFO][4735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.938 [INFO][4735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.965 [INFO][4758] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" HandleID="k8s-pod-network.e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.965 [INFO][4758] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.965 [INFO][4758] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.975 [WARNING][4758] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" HandleID="k8s-pod-network.e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.975 [INFO][4758] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" HandleID="k8s-pod-network.e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.977 [INFO][4758] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:58.986241 containerd[1457]: 2025-11-08 00:21:58.981 [INFO][4735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:21:58.986617 containerd[1457]: time="2025-11-08T00:21:58.986467945Z" level=info msg="TearDown network for sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\" successfully" Nov 8 00:21:58.986617 containerd[1457]: time="2025-11-08T00:21:58.986505706Z" level=info msg="StopPodSandbox for \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\" returns successfully" Nov 8 00:21:58.987330 containerd[1457]: time="2025-11-08T00:21:58.987300056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b76c59587-pzwfg,Uid:0eac0b9a-9cb8-40ad-80af-819a17da25f0,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:21:58.989132 systemd[1]: run-netns-cni\x2d82378aca\x2d04aa\x2dbadc\x2d73f9\x2d227b07681973.mount: Deactivated successfully. Nov 8 00:21:59.230712 kubelet[2518]: E1108 00:21:59.229474 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:59.232894 containerd[1457]: time="2025-11-08T00:21:59.232843768Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:59.235125 kubelet[2518]: E1108 00:21:59.235063 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwhc4" podUID="2c05d744-421f-40ca-8faf-61db719dbbcd" Nov 8 00:21:59.235218 kubelet[2518]: E1108 00:21:59.235107 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" podUID="5ca7b27d-c4bf-4555-ac93-a9fa936c758c" Nov 8 00:21:59.263972 containerd[1457]: time="2025-11-08T00:21:59.263575997Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:21:59.263972 containerd[1457]: time="2025-11-08T00:21:59.263582810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:21:59.264655 kubelet[2518]: E1108 00:21:59.264617 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:59.264847 kubelet[2518]: E1108 00:21:59.264820 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:59.265157 kubelet[2518]: E1108 00:21:59.265124 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2875f70c47664434af66c35207c6af08,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqdcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84655bbcf4-42fp6_calico-system(8afc460d-c7d5-4574-a129-acae64d116ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:59.265971 containerd[1457]: time="2025-11-08T00:21:59.265679493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:21:59.269838 kubelet[2518]: I1108 00:21:59.267971 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-htskj" 
podStartSLOduration=39.267960753 podStartE2EDuration="39.267960753s" podCreationTimestamp="2025-11-08 00:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:59.267503495 +0000 UTC m=+44.495309368" watchObservedRunningTime="2025-11-08 00:21:59.267960753 +0000 UTC m=+44.495766626" Nov 8 00:21:59.580121 systemd-networkd[1388]: cali1ffef67e1f1: Gained IPv6LL Nov 8 00:21:59.638549 containerd[1457]: time="2025-11-08T00:21:59.638417962Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:59.639558 containerd[1457]: time="2025-11-08T00:21:59.639524397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:21:59.639701 containerd[1457]: time="2025-11-08T00:21:59.639587786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:21:59.639827 kubelet[2518]: E1108 00:21:59.639757 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:21:59.640226 kubelet[2518]: E1108 00:21:59.639827 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:21:59.640226 kubelet[2518]: E1108 00:21:59.640086 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glmpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-67lwd_calico-system(6fb889d5-2903-4e6b-a458-6fb9eecb4dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:59.640341 containerd[1457]: time="2025-11-08T00:21:59.640152807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:21:59.708103 systemd-networkd[1388]: cali70c7f638d2b: Gained IPv6LL Nov 8 00:21:59.834721 systemd-networkd[1388]: calia67dca5dbfe: Link UP Nov 8 00:21:59.836546 systemd-networkd[1388]: cali95e44acbb14: Gained IPv6LL Nov 8 00:21:59.838893 systemd-networkd[1388]: calia67dca5dbfe: Gained carrier Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.268 [INFO][4766] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0 coredns-674b8bbfcf- kube-system 4c9a132b-c373-4aff-a37f-8a647d110275 1033 0 2025-11-08 00:21:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-c5rwc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia67dca5dbfe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Namespace="kube-system" Pod="coredns-674b8bbfcf-c5rwc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c5rwc-" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.269 [INFO][4766] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Namespace="kube-system" Pod="coredns-674b8bbfcf-c5rwc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.320 [INFO][4794] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" HandleID="k8s-pod-network.45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.320 [INFO][4794] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" HandleID="k8s-pod-network.45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-c5rwc", "timestamp":"2025-11-08 00:21:59.320594933 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.320 [INFO][4794] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.321 [INFO][4794] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.321 [INFO][4794] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.698 [INFO][4794] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" host="localhost" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.790 [INFO][4794] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.802 [INFO][4794] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.806 [INFO][4794] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.808 [INFO][4794] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.808 [INFO][4794] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" host="localhost" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.812 [INFO][4794] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.816 [INFO][4794] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" host="localhost" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.826 [INFO][4794] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" host="localhost" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.826 [INFO][4794] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" host="localhost" Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.826 [INFO][4794] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:59.857226 containerd[1457]: 2025-11-08 00:21:59.826 [INFO][4794] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" HandleID="k8s-pod-network.45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:21:59.858289 containerd[1457]: 2025-11-08 00:21:59.830 [INFO][4766] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Namespace="kube-system" Pod="coredns-674b8bbfcf-c5rwc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4c9a132b-c373-4aff-a37f-8a647d110275", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-c5rwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia67dca5dbfe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:59.858289 containerd[1457]: 2025-11-08 00:21:59.830 [INFO][4766] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Namespace="kube-system" Pod="coredns-674b8bbfcf-c5rwc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:21:59.858289 containerd[1457]: 2025-11-08 00:21:59.830 [INFO][4766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia67dca5dbfe 
ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Namespace="kube-system" Pod="coredns-674b8bbfcf-c5rwc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:21:59.858289 containerd[1457]: 2025-11-08 00:21:59.838 [INFO][4766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Namespace="kube-system" Pod="coredns-674b8bbfcf-c5rwc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:21:59.858289 containerd[1457]: 2025-11-08 00:21:59.843 [INFO][4766] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Namespace="kube-system" Pod="coredns-674b8bbfcf-c5rwc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4c9a132b-c373-4aff-a37f-8a647d110275", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e", Pod:"coredns-674b8bbfcf-c5rwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia67dca5dbfe", MAC:"c2:7f:38:4c:9f:16", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:59.858289 containerd[1457]: 2025-11-08 00:21:59.852 [INFO][4766] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e" Namespace="kube-system" Pod="coredns-674b8bbfcf-c5rwc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:21:59.879056 containerd[1457]: time="2025-11-08T00:21:59.878754793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:59.879056 containerd[1457]: time="2025-11-08T00:21:59.878823351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:59.879056 containerd[1457]: time="2025-11-08T00:21:59.878833941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:59.879056 containerd[1457]: time="2025-11-08T00:21:59.878901959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:59.910012 systemd[1]: Started cri-containerd-45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e.scope - libcontainer container 45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e. Nov 8 00:21:59.923931 systemd-networkd[1388]: calif50b1d94194: Link UP Nov 8 00:21:59.924203 systemd-networkd[1388]: calif50b1d94194: Gained carrier Nov 8 00:21:59.935814 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.701 [INFO][4780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0 calico-apiserver-5b76c59587- calico-apiserver 0eac0b9a-9cb8-40ad-80af-819a17da25f0 1034 0 2025-11-08 00:21:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b76c59587 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b76c59587-pzwfg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif50b1d94194 [] [] }} ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-pzwfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.701 [INFO][4780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-pzwfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.821 [INFO][4805] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" HandleID="k8s-pod-network.3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.821 [INFO][4805] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" HandleID="k8s-pod-network.3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005821e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b76c59587-pzwfg", "timestamp":"2025-11-08 00:21:59.821618465 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.821 [INFO][4805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.826 [INFO][4805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.826 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.833 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" host="localhost" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.891 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.897 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.902 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.904 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.904 [INFO][4805] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" host="localhost" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.906 [INFO][4805] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893 Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.909 [INFO][4805] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" host="localhost" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.915 [INFO][4805] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" host="localhost" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.916 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" host="localhost" Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.916 [INFO][4805] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
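Note how every ipam_plugin invocation in this log brackets its work with "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock"; that serialization is why the workloads on this node receive strictly sequential addresses from the block (.134, .135, then .136 just above). A toy Go sketch of the same pattern — a mutex standing in for the host-wide lock, emphatically not Calico's real allocator, which also handles affinities and datastore writes:

// Toy illustration of lock-serialized sequential assignment (not Calico code).
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type blockAllocator struct {
	mu   sync.Mutex // stands in for the host-wide IPAM lock
	next netip.Addr // next free address in the block
}

func (b *blockAllocator) assign() netip.Addr {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	ip := b.next
	b.next = b.next.Next()
	return ip
}

func main() {
	alloc := &blockAllocator{next: netip.MustParseAddr("192.168.88.134")}
	for i := 0; i < 3; i++ {
		fmt.Println(alloc.assign()) // .134, .135, .136, as in the log
	}
}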
Nov 8 00:21:59.945564 containerd[1457]: 2025-11-08 00:21:59.916 [INFO][4805] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" HandleID="k8s-pod-network.3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:21:59.946432 containerd[1457]: 2025-11-08 00:21:59.920 [INFO][4780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-pzwfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0", GenerateName:"calico-apiserver-5b76c59587-", Namespace:"calico-apiserver", SelfLink:"", UID:"0eac0b9a-9cb8-40ad-80af-819a17da25f0", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b76c59587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b76c59587-pzwfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif50b1d94194", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:59.946432 containerd[1457]: 2025-11-08 00:21:59.920 [INFO][4780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-pzwfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:21:59.946432 containerd[1457]: 2025-11-08 00:21:59.920 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif50b1d94194 ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-pzwfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:21:59.946432 containerd[1457]: 2025-11-08 00:21:59.925 [INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-pzwfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:21:59.946432 containerd[1457]: 2025-11-08 00:21:59.926 [INFO][4780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-pzwfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0", GenerateName:"calico-apiserver-5b76c59587-", Namespace:"calico-apiserver", SelfLink:"", UID:"0eac0b9a-9cb8-40ad-80af-819a17da25f0", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b76c59587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893", Pod:"calico-apiserver-5b76c59587-pzwfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif50b1d94194", MAC:"d6:a6:9b:fc:8f:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:59.946432 containerd[1457]: 2025-11-08 00:21:59.939 [INFO][4780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893" Namespace="calico-apiserver" Pod="calico-apiserver-5b76c59587-pzwfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:21:59.971098 containerd[1457]: time="2025-11-08T00:21:59.971050735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c5rwc,Uid:4c9a132b-c373-4aff-a37f-8a647d110275,Namespace:kube-system,Attempt:1,} returns sandbox id \"45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e\"" Nov 8 00:21:59.971911 containerd[1457]: time="2025-11-08T00:21:59.971614993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:59.971911 containerd[1457]: time="2025-11-08T00:21:59.971690004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:59.971911 containerd[1457]: time="2025-11-08T00:21:59.971705453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:59.972254 kubelet[2518]: E1108 00:21:59.972228 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:59.972932 containerd[1457]: time="2025-11-08T00:21:59.971793249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:59.980129 containerd[1457]: time="2025-11-08T00:21:59.979950973Z" level=info msg="CreateContainer within sandbox \"45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:21:59.998678 containerd[1457]: time="2025-11-08T00:21:59.998638254Z" level=info msg="CreateContainer within sandbox \"45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e057c8cc52a439923b35f82fc90be48f203572dbf5c11db81875db93d9a0431\"" Nov 8 00:21:59.999855 containerd[1457]: time="2025-11-08T00:21:59.999154472Z" level=info msg="StartContainer for \"1e057c8cc52a439923b35f82fc90be48f203572dbf5c11db81875db93d9a0431\"" Nov 8 00:22:00.004010 systemd[1]: Started cri-containerd-3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893.scope - libcontainer container 3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893. Nov 8 00:22:00.018442 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:22:00.028068 systemd[1]: Started cri-containerd-1e057c8cc52a439923b35f82fc90be48f203572dbf5c11db81875db93d9a0431.scope - libcontainer container 1e057c8cc52a439923b35f82fc90be48f203572dbf5c11db81875db93d9a0431. Nov 8 00:22:00.028145 systemd-networkd[1388]: cali589d23ba101: Gained IPv6LL Nov 8 00:22:00.040062 containerd[1457]: time="2025-11-08T00:22:00.039853788Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:00.041422 containerd[1457]: time="2025-11-08T00:22:00.041385522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:00.042552 containerd[1457]: time="2025-11-08T00:22:00.041556102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:00.044916 kubelet[2518]: E1108 00:22:00.044845 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:00.044916 kubelet[2518]: E1108 00:22:00.044913 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:00.045202 kubelet[2518]: E1108 00:22:00.045153 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tz9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b76c59587-szgkh_calico-apiserver(ad60f746-d64e-4394-bbcc-99e4406b9d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:00.046436 kubelet[2518]: E1108 00:22:00.046391 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" podUID="ad60f746-d64e-4394-bbcc-99e4406b9d56" Nov 8 00:22:00.052951 containerd[1457]: time="2025-11-08T00:22:00.052891367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:22:00.058200 containerd[1457]: time="2025-11-08T00:22:00.058065495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b76c59587-pzwfg,Uid:0eac0b9a-9cb8-40ad-80af-819a17da25f0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893\"" Nov 8 00:22:00.066615 containerd[1457]: time="2025-11-08T00:22:00.066587452Z" level=info msg="StartContainer for \"1e057c8cc52a439923b35f82fc90be48f203572dbf5c11db81875db93d9a0431\" returns successfully" Nov 8 00:22:00.219996 systemd-networkd[1388]: cali21988fea43c: Gained IPv6LL Nov 
8 00:22:00.243267 kubelet[2518]: E1108 00:22:00.243023 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:00.250384 kubelet[2518]: E1108 00:22:00.250208 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" podUID="ad60f746-d64e-4394-bbcc-99e4406b9d56" Nov 8 00:22:00.250868 kubelet[2518]: E1108 00:22:00.250841 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:00.251774 kubelet[2518]: E1108 00:22:00.251734 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwhc4" podUID="2c05d744-421f-40ca-8faf-61db719dbbcd" Nov 8 00:22:00.337468 kubelet[2518]: I1108 00:22:00.337393 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c5rwc" podStartSLOduration=40.337372861 podStartE2EDuration="40.337372861s" podCreationTimestamp="2025-11-08 00:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:22:00.277859369 +0000 UTC m=+45.505665262" watchObservedRunningTime="2025-11-08 00:22:00.337372861 +0000 UTC m=+45.565178725" Nov 8 00:22:00.348034 systemd-networkd[1388]: cali97138940eef: Gained IPv6LL Nov 8 00:22:00.382292 containerd[1457]: time="2025-11-08T00:22:00.382227282Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:00.384510 containerd[1457]: time="2025-11-08T00:22:00.384314849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:22:00.384510 containerd[1457]: time="2025-11-08T00:22:00.384375943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:00.384641 kubelet[2518]: E1108 00:22:00.384482 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:00.384641 kubelet[2518]: E1108 00:22:00.384521 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:00.384974 kubelet[2518]: E1108 00:22:00.384705 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqdcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84655bbcf4-42fp6_calico-system(8afc460d-c7d5-4574-a129-acae64d116ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:00.385127 containerd[1457]: time="2025-11-08T00:22:00.384927078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:22:00.386050 kubelet[2518]: E1108 00:22:00.385973 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84655bbcf4-42fp6" podUID="8afc460d-c7d5-4574-a129-acae64d116ee" Nov 8 00:22:00.754870 containerd[1457]: time="2025-11-08T00:22:00.754791647Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:00.765451 containerd[1457]: time="2025-11-08T00:22:00.765385172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:22:00.765451 containerd[1457]: time="2025-11-08T00:22:00.765420528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:22:00.765653 kubelet[2518]: E1108 00:22:00.765594 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:00.766068 kubelet[2518]: E1108 00:22:00.765658 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:00.766068 kubelet[2518]: E1108 00:22:00.765904 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glmpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-67lwd_calico-system(6fb889d5-2903-4e6b-a458-6fb9eecb4dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:00.766247 containerd[1457]: time="2025-11-08T00:22:00.765952827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:00.768050 kubelet[2518]: E1108 00:22:00.768015 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd" Nov 8 00:22:01.132853 containerd[1457]: time="2025-11-08T00:22:01.132713144Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:01.155183 containerd[1457]: time="2025-11-08T00:22:01.155083541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:01.155183 containerd[1457]: 
time="2025-11-08T00:22:01.155131822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:01.155451 kubelet[2518]: E1108 00:22:01.155392 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:01.155507 kubelet[2518]: E1108 00:22:01.155460 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:01.155689 kubelet[2518]: E1108 00:22:01.155628 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fchkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b76c59587-pzwfg_calico-apiserver(0eac0b9a-9cb8-40ad-80af-819a17da25f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:01.156953 kubelet[2518]: E1108 00:22:01.156899 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" podUID="0eac0b9a-9cb8-40ad-80af-819a17da25f0" Nov 8 00:22:01.244057 systemd-networkd[1388]: calif50b1d94194: Gained IPv6LL Nov 8 00:22:01.249919 kubelet[2518]: E1108 00:22:01.249519 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:01.249919 kubelet[2518]: E1108 00:22:01.249521 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" podUID="0eac0b9a-9cb8-40ad-80af-819a17da25f0" Nov 8 00:22:01.249919 kubelet[2518]: E1108 00:22:01.249857 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:01.250658 kubelet[2518]: E1108 00:22:01.250619 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd" Nov 8 00:22:01.250977 kubelet[2518]: E1108 00:22:01.250931 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84655bbcf4-42fp6" podUID="8afc460d-c7d5-4574-a129-acae64d116ee" Nov 8 00:22:01.692057 systemd-networkd[1388]: calia67dca5dbfe: Gained IPv6LL Nov 8 00:22:02.257728 kubelet[2518]: E1108 00:22:02.257680 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:02.649093 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:34024.service - OpenSSH per-connection server daemon (10.0.0.1:34024). Nov 8 00:22:02.695840 sshd[4969]: Accepted publickey for core from 10.0.0.1 port 34024 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:02.697660 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:02.701981 systemd-logind[1447]: New session 9 of user core. Nov 8 00:22:02.709956 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:22:02.847059 sshd[4969]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:02.851037 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:34024.service: Deactivated successfully. Nov 8 00:22:02.852903 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:22:02.853674 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:22:02.854648 systemd-logind[1447]: Removed session 9. Nov 8 00:22:07.860618 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:57778.service - OpenSSH per-connection server daemon (10.0.0.1:57778). Nov 8 00:22:07.892263 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 57778 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:07.894039 sshd[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:07.898169 systemd-logind[1447]: New session 10 of user core. Nov 8 00:22:07.907933 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:22:08.114357 sshd[4993]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:08.119108 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:57778.service: Deactivated successfully. Nov 8 00:22:08.121005 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:22:08.121688 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:22:08.122599 systemd-logind[1447]: Removed session 10. Nov 8 00:22:12.846096 containerd[1457]: time="2025-11-08T00:22:12.845614236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:13.127823 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:59556.service - OpenSSH per-connection server daemon (10.0.0.1:59556). Nov 8 00:22:13.162424 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 59556 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:13.164358 sshd[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:13.168648 systemd-logind[1447]: New session 11 of user core. Nov 8 00:22:13.177134 systemd[1]: Started session-11.scope - Session 11 of User core. 
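The pull failures above all follow the same shape: containerd asks ghcr.io for a ghcr.io/flatcar/calico/*:v3.30.4 manifest, the registry answers 404 ("trying next host - response was http.StatusNotFound"), and kubelet surfaces the rpc NotFound first as ErrImagePull, then as ImagePullBackOff. A minimal Go sketch of that resolution step, probing the OCI distribution API directly rather than going through containerd's resolver; the anonymous ghcr.io token endpoint used here is an assumption based on GHCR's documented pull flow, not something taken from this log:

    // probe.go - reproduce the "not found" resolution outcome for one image tag.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        repo, tag := "flatcar/calico/apiserver", "v3.30.4"

        // 1. Anonymous pull token for the repository (assumed GHCR token flow).
        resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
        if err != nil {
            fmt.Fprintln(os.Stderr, "token request failed:", err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            fmt.Fprintln(os.Stderr, "token decode failed:", err)
            os.Exit(1)
        }

        // 2. HEAD the manifest; an HTTP 404 here is what containerd reports as
        //    failed to resolve reference "...": not found.
        req, _ := http.NewRequest(http.MethodHead,
            "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            fmt.Fprintln(os.Stderr, "manifest request failed:", err)
            os.Exit(1)
        }
        defer res.Body.Close()
        fmt.Printf("ghcr.io/%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
    }

A 404 from step 2 matches the NotFound errors logged for apiserver, whisker, whisker-backend, csi, node-driver-registrar, and kube-controllers alike, which points at a missing tag or unpublished repository rather than a node-side networking fault.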
Nov 8 00:22:13.314211 sshd[5010]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:13.315688 containerd[1457]: time="2025-11-08T00:22:13.315648901Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:13.318332 containerd[1457]: time="2025-11-08T00:22:13.318285659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:13.318518 containerd[1457]: time="2025-11-08T00:22:13.318374339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:13.318552 kubelet[2518]: E1108 00:22:13.318486 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:13.318552 kubelet[2518]: E1108 00:22:13.318541 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:13.321103 kubelet[2518]: E1108 00:22:13.318878 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tz9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b76c59587-szgkh_calico-apiserver(ad60f746-d64e-4394-bbcc-99e4406b9d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:13.321103 kubelet[2518]: E1108 00:22:13.320125 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" podUID="ad60f746-d64e-4394-bbcc-99e4406b9d56" Nov 8 00:22:13.321301 containerd[1457]: time="2025-11-08T00:22:13.319030681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:22:13.324222 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:59556.service: Deactivated successfully. Nov 8 00:22:13.326065 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:22:13.328660 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:22:13.342291 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:59564.service - OpenSSH per-connection server daemon (10.0.0.1:59564). Nov 8 00:22:13.343617 systemd-logind[1447]: Removed session 11. Nov 8 00:22:13.373585 sshd[5026]: Accepted publickey for core from 10.0.0.1 port 59564 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:13.375444 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:13.381194 systemd-logind[1447]: New session 12 of user core. Nov 8 00:22:13.386010 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:22:13.562416 sshd[5026]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:13.576190 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:59564.service: Deactivated successfully. Nov 8 00:22:13.579583 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:22:13.581985 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:22:13.590452 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:59578.service - OpenSSH per-connection server daemon (10.0.0.1:59578). Nov 8 00:22:13.592229 systemd-logind[1447]: Removed session 12. 
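Note the timing in the entries above: the apiserver pull that failed at 00:22:00 and 00:22:01 is not retried until 00:22:12, and pod_workers alternates between ErrImagePull (an attempt just failed) and ImagePullBackOff (kubelet is waiting out the backoff before the next attempt). A sketch of that retry cadence; the constants are the commonly cited kubelet defaults (10s, doubling to a 5m cap) and should be treated as assumptions rather than values read from this system:

    // backoff.go - print an exponential image-pull retry schedule.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute // assumed defaults
        elapsed := time.Duration(0)
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("attempt %d at t+%v, next retry in %v\n", attempt, elapsed, delay)
            elapsed += delay // wait out the backoff, then try again
            if delay *= 2; delay > maxDelay {
                delay = maxDelay // the backoff is capped, not unbounded
            }
        }
    }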
Nov 8 00:22:13.619280 sshd[5040]: Accepted publickey for core from 10.0.0.1 port 59578 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:13.621105 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:13.625265 systemd-logind[1447]: New session 13 of user core. Nov 8 00:22:13.642189 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:22:13.654923 containerd[1457]: time="2025-11-08T00:22:13.654874625Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:13.794506 containerd[1457]: time="2025-11-08T00:22:13.794440612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:22:13.794676 containerd[1457]: time="2025-11-08T00:22:13.794544933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:22:13.794796 kubelet[2518]: E1108 00:22:13.794751 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:13.794858 kubelet[2518]: E1108 00:22:13.794834 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:13.795049 kubelet[2518]: E1108 00:22:13.794984 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2875f70c47664434af66c35207c6af08,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqdcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-84655bbcf4-42fp6_calico-system(8afc460d-c7d5-4574-a129-acae64d116ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:13.797275 containerd[1457]: time="2025-11-08T00:22:13.797241875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:22:13.920801 sshd[5040]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:13.925583 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:59578.service: Deactivated successfully. Nov 8 00:22:13.927620 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:22:13.928612 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:22:13.929446 systemd-logind[1447]: Removed session 13. Nov 8 00:22:14.242088 containerd[1457]: time="2025-11-08T00:22:14.241923153Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:14.243230 containerd[1457]: time="2025-11-08T00:22:14.243175700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:22:14.243288 containerd[1457]: time="2025-11-08T00:22:14.243209013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:14.243499 kubelet[2518]: E1108 00:22:14.243429 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:14.243558 kubelet[2518]: E1108 00:22:14.243505 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:14.243759 kubelet[2518]: E1108 00:22:14.243715 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqdcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84655bbcf4-42fp6_calico-system(8afc460d-c7d5-4574-a129-acae64d116ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:14.244023 containerd[1457]: time="2025-11-08T00:22:14.243977078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:22:14.244946 kubelet[2518]: E1108 00:22:14.244905 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84655bbcf4-42fp6" podUID="8afc460d-c7d5-4574-a129-acae64d116ee" Nov 8 00:22:14.600778 containerd[1457]: time="2025-11-08T00:22:14.600707870Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:14.601968 containerd[1457]: time="2025-11-08T00:22:14.601897424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:22:14.602164 containerd[1457]: time="2025-11-08T00:22:14.601971025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:14.602192 kubelet[2518]: E1108 00:22:14.602151 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:14.602546 kubelet[2518]: E1108 00:22:14.602213 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:14.602632 kubelet[2518]: E1108 00:22:14.602520 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcjcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86f57bbc6c-bq2jn_calico-system(5ca7b27d-c4bf-4555-ac93-a9fa936c758c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:14.602833 containerd[1457]: time="2025-11-08T00:22:14.602602618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:22:14.603971 kubelet[2518]: E1108 00:22:14.603940 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" podUID="5ca7b27d-c4bf-4555-ac93-a9fa936c758c" Nov 8 00:22:14.834405 containerd[1457]: time="2025-11-08T00:22:14.834357664Z" level=info msg="StopPodSandbox for \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\"" Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.872 [WARNING][5064] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vwhc4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2c05d744-421f-40ca-8faf-61db719dbbcd", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8", Pod:"goldmane-666569f655-vwhc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali589d23ba101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.872 [INFO][5064] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.872 [INFO][5064] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" iface="eth0" netns="" Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.872 [INFO][5064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.872 [INFO][5064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.895 [INFO][5075] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" HandleID="k8s-pod-network.904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.895 [INFO][5075] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.895 [INFO][5075] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.903 [WARNING][5075] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" HandleID="k8s-pod-network.904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.903 [INFO][5075] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" HandleID="k8s-pod-network.904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.904 [INFO][5075] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:14.910828 containerd[1457]: 2025-11-08 00:22:14.907 [INFO][5064] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:22:14.910828 containerd[1457]: time="2025-11-08T00:22:14.910776302Z" level=info msg="TearDown network for sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\" successfully" Nov 8 00:22:14.910828 containerd[1457]: time="2025-11-08T00:22:14.910820928Z" level=info msg="StopPodSandbox for \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\" returns successfully" Nov 8 00:22:14.918412 containerd[1457]: time="2025-11-08T00:22:14.918367181Z" level=info msg="RemovePodSandbox for \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\"" Nov 8 00:22:14.920556 containerd[1457]: time="2025-11-08T00:22:14.920528772Z" level=info msg="Forcibly stopping sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\"" Nov 8 00:22:14.957689 containerd[1457]: time="2025-11-08T00:22:14.957634598Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:14.958976 containerd[1457]: time="2025-11-08T00:22:14.958938942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:22:14.959100 containerd[1457]: time="2025-11-08T00:22:14.959018836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:22:14.959169 kubelet[2518]: E1108 00:22:14.959123 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:22:14.959216 kubelet[2518]: E1108 00:22:14.959183 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:22:14.959350 kubelet[2518]: E1108 00:22:14.959307 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glmpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-67lwd_calico-system(6fb889d5-2903-4e6b-a458-6fb9eecb4dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:14.962514 containerd[1457]: time="2025-11-08T00:22:14.962486706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.955 [WARNING][5093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vwhc4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2c05d744-421f-40ca-8faf-61db719dbbcd", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c9485cfbdb9869510498e7e41112816ec7946158fab9df56f644c2b08e192b8", Pod:"goldmane-666569f655-vwhc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali589d23ba101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.955 [INFO][5093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.955 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" iface="eth0" netns="" Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.955 [INFO][5093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.955 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.979 [INFO][5102] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" HandleID="k8s-pod-network.904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.979 [INFO][5102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.979 [INFO][5102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.985 [WARNING][5102] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" HandleID="k8s-pod-network.904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.985 [INFO][5102] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" HandleID="k8s-pod-network.904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Workload="localhost-k8s-goldmane--666569f655--vwhc4-eth0" Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.986 [INFO][5102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:14.992278 containerd[1457]: 2025-11-08 00:22:14.989 [INFO][5093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b" Nov 8 00:22:14.992684 containerd[1457]: time="2025-11-08T00:22:14.992320695Z" level=info msg="TearDown network for sandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\" successfully" Nov 8 00:22:14.999568 containerd[1457]: time="2025-11-08T00:22:14.999540551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:14.999612 containerd[1457]: time="2025-11-08T00:22:14.999587973Z" level=info msg="RemovePodSandbox \"904e232a254818c7ebded9f220abea80dc94b850866988356bdda68acaf1788b\" returns successfully" Nov 8 00:22:15.000203 containerd[1457]: time="2025-11-08T00:22:15.000156354Z" level=info msg="StopPodSandbox for \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\"" Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.035 [WARNING][5120] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0", GenerateName:"calico-apiserver-5b76c59587-", Namespace:"calico-apiserver", SelfLink:"", UID:"0eac0b9a-9cb8-40ad-80af-819a17da25f0", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b76c59587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893", Pod:"calico-apiserver-5b76c59587-pzwfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif50b1d94194", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.035 [INFO][5120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.035 [INFO][5120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" iface="eth0" netns="" Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.035 [INFO][5120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.035 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.055 [INFO][5129] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" HandleID="k8s-pod-network.e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.055 [INFO][5129] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.056 [INFO][5129] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.061 [WARNING][5129] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" HandleID="k8s-pod-network.e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.061 [INFO][5129] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" HandleID="k8s-pod-network.e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.063 [INFO][5129] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.068332 containerd[1457]: 2025-11-08 00:22:15.065 [INFO][5120] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:22:15.068793 containerd[1457]: time="2025-11-08T00:22:15.068373316Z" level=info msg="TearDown network for sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\" successfully" Nov 8 00:22:15.068793 containerd[1457]: time="2025-11-08T00:22:15.068407292Z" level=info msg="StopPodSandbox for \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\" returns successfully" Nov 8 00:22:15.069044 containerd[1457]: time="2025-11-08T00:22:15.068994108Z" level=info msg="RemovePodSandbox for \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\"" Nov 8 00:22:15.069086 containerd[1457]: time="2025-11-08T00:22:15.069045757Z" level=info msg="Forcibly stopping sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\"" Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.101 [WARNING][5148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0", GenerateName:"calico-apiserver-5b76c59587-", Namespace:"calico-apiserver", SelfLink:"", UID:"0eac0b9a-9cb8-40ad-80af-819a17da25f0", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b76c59587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3460e387119d8f8de44f4390c6dbead7a8aa841dc808eecd06f5fc10c4f0a893", Pod:"calico-apiserver-5b76c59587-pzwfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif50b1d94194", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.101 [INFO][5148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.101 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" iface="eth0" netns="" Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.101 [INFO][5148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.101 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.123 [INFO][5157] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" HandleID="k8s-pod-network.e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.123 [INFO][5157] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.123 [INFO][5157] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.130 [WARNING][5157] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" HandleID="k8s-pod-network.e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.130 [INFO][5157] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" HandleID="k8s-pod-network.e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Workload="localhost-k8s-calico--apiserver--5b76c59587--pzwfg-eth0" Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.131 [INFO][5157] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.137111 containerd[1457]: 2025-11-08 00:22:15.134 [INFO][5148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5" Nov 8 00:22:15.137616 containerd[1457]: time="2025-11-08T00:22:15.137115075Z" level=info msg="TearDown network for sandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\" successfully" Nov 8 00:22:15.151525 containerd[1457]: time="2025-11-08T00:22:15.151490527Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:15.151603 containerd[1457]: time="2025-11-08T00:22:15.151537948Z" level=info msg="RemovePodSandbox \"e00fc8801405cb1fc3361faf79554f27da0f5006cc1c0b2763a6856e0ce6b8a5\" returns successfully" Nov 8 00:22:15.152064 containerd[1457]: time="2025-11-08T00:22:15.152025164Z" level=info msg="StopPodSandbox for \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\"" Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.185 [WARNING][5175] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--67lwd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828", Pod:"csi-node-driver-67lwd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70c7f638d2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.185 [INFO][5175] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.185 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" iface="eth0" netns="" Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.185 [INFO][5175] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.185 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.205 [INFO][5183] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" HandleID="k8s-pod-network.a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.205 [INFO][5183] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.205 [INFO][5183] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.211 [WARNING][5183] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" HandleID="k8s-pod-network.a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.211 [INFO][5183] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" HandleID="k8s-pod-network.a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.212 [INFO][5183] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.218068 containerd[1457]: 2025-11-08 00:22:15.215 [INFO][5175] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:22:15.218068 containerd[1457]: time="2025-11-08T00:22:15.218034477Z" level=info msg="TearDown network for sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\" successfully" Nov 8 00:22:15.218068 containerd[1457]: time="2025-11-08T00:22:15.218060178Z" level=info msg="StopPodSandbox for \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\" returns successfully" Nov 8 00:22:15.218533 containerd[1457]: time="2025-11-08T00:22:15.218513808Z" level=info msg="RemovePodSandbox for \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\"" Nov 8 00:22:15.218558 containerd[1457]: time="2025-11-08T00:22:15.218538826Z" level=info msg="Forcibly stopping sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\"" Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.249 [WARNING][5201] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--67lwd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6fb889d5-2903-4e6b-a458-6fb9eecb4dcd", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"888c5775f50dbe233426af37765c0e63be4b731f0a56ab95ab210b6bd850e828", Pod:"csi-node-driver-67lwd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70c7f638d2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.249 [INFO][5201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.249 [INFO][5201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" iface="eth0" netns="" Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.250 [INFO][5201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.250 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.270 [INFO][5211] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" HandleID="k8s-pod-network.a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.270 [INFO][5211] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.270 [INFO][5211] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.275 [WARNING][5211] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" HandleID="k8s-pod-network.a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.275 [INFO][5211] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" HandleID="k8s-pod-network.a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Workload="localhost-k8s-csi--node--driver--67lwd-eth0" Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.277 [INFO][5211] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.286499 containerd[1457]: 2025-11-08 00:22:15.282 [INFO][5201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19" Nov 8 00:22:15.287322 containerd[1457]: time="2025-11-08T00:22:15.286565392Z" level=info msg="TearDown network for sandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\" successfully" Nov 8 00:22:15.290384 containerd[1457]: time="2025-11-08T00:22:15.290353363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:15.290457 containerd[1457]: time="2025-11-08T00:22:15.290396325Z" level=info msg="RemovePodSandbox \"a19254ec8fe721ad7a74d51d11140a84e2bc3f46f0b4358a54850ec82a68fc19\" returns successfully" Nov 8 00:22:15.290798 containerd[1457]: time="2025-11-08T00:22:15.290773429Z" level=info msg="StopPodSandbox for \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\"" Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.324 [WARNING][5229] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0", GenerateName:"calico-apiserver-5b76c59587-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad60f746-d64e-4394-bbcc-99e4406b9d56", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b76c59587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59", Pod:"calico-apiserver-5b76c59587-szgkh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ffef67e1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.324 [INFO][5229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.324 [INFO][5229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" iface="eth0" netns="" Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.324 [INFO][5229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.324 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.346 [INFO][5238] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" HandleID="k8s-pod-network.4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.346 [INFO][5238] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.346 [INFO][5238] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.351 [WARNING][5238] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" HandleID="k8s-pod-network.4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.351 [INFO][5238] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" HandleID="k8s-pod-network.4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.353 [INFO][5238] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.358641 containerd[1457]: 2025-11-08 00:22:15.355 [INFO][5229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:22:15.359213 containerd[1457]: time="2025-11-08T00:22:15.358685817Z" level=info msg="TearDown network for sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\" successfully" Nov 8 00:22:15.359213 containerd[1457]: time="2025-11-08T00:22:15.358720162Z" level=info msg="StopPodSandbox for \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\" returns successfully" Nov 8 00:22:15.359277 containerd[1457]: time="2025-11-08T00:22:15.359256352Z" level=info msg="RemovePodSandbox for \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\"" Nov 8 00:22:15.359308 containerd[1457]: time="2025-11-08T00:22:15.359287401Z" level=info msg="Forcibly stopping sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\"" Nov 8 00:22:15.417857 containerd[1457]: time="2025-11-08T00:22:15.417784086Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:15.419032 containerd[1457]: time="2025-11-08T00:22:15.418987607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:22:15.419148 containerd[1457]: time="2025-11-08T00:22:15.419055337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:22:15.419279 kubelet[2518]: E1108 00:22:15.419227 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:15.419357 kubelet[2518]: E1108 00:22:15.419290 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:15.419456 kubelet[2518]: E1108 00:22:15.419421 2518 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glmpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-67lwd_calico-system(6fb889d5-2903-4e6b-a458-6fb9eecb4dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:15.420697 kubelet[2518]: E1108 00:22:15.420652 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd" Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.391 [WARNING][5256] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0", GenerateName:"calico-apiserver-5b76c59587-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad60f746-d64e-4394-bbcc-99e4406b9d56", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b76c59587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f2fcb56d9b0bb5833427d96b6e46d45afa3c2a592c2b80e0b7f5c87d45f9a59", Pod:"calico-apiserver-5b76c59587-szgkh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ffef67e1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.392 [INFO][5256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.392 [INFO][5256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" iface="eth0" netns="" Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.392 [INFO][5256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.392 [INFO][5256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.413 [INFO][5265] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" HandleID="k8s-pod-network.4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.413 [INFO][5265] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.413 [INFO][5265] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.418 [WARNING][5265] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" HandleID="k8s-pod-network.4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.418 [INFO][5265] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" HandleID="k8s-pod-network.4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Workload="localhost-k8s-calico--apiserver--5b76c59587--szgkh-eth0" Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.420 [INFO][5265] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.425829 containerd[1457]: 2025-11-08 00:22:15.422 [INFO][5256] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0" Nov 8 00:22:15.425829 containerd[1457]: time="2025-11-08T00:22:15.425739867Z" level=info msg="TearDown network for sandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\" successfully" Nov 8 00:22:15.429969 containerd[1457]: time="2025-11-08T00:22:15.429944126Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:15.430036 containerd[1457]: time="2025-11-08T00:22:15.429987348Z" level=info msg="RemovePodSandbox \"4ded067b51da9b95242a66e286da1b08f547e7293de76c3e2fbb7f4f76bc24c0\" returns successfully" Nov 8 00:22:15.430488 containerd[1457]: time="2025-11-08T00:22:15.430459775Z" level=info msg="StopPodSandbox for \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\"" Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.461 [WARNING][5282] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0", GenerateName:"calico-kube-controllers-86f57bbc6c-", Namespace:"calico-system", SelfLink:"", UID:"5ca7b27d-c4bf-4555-ac93-a9fa936c758c", ResourceVersion:"1211", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f57bbc6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62", Pod:"calico-kube-controllers-86f57bbc6c-bq2jn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95e44acbb14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.461 [INFO][5282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.461 [INFO][5282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" iface="eth0" netns="" Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.461 [INFO][5282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.461 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.482 [INFO][5290] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" HandleID="k8s-pod-network.193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.482 [INFO][5290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.482 [INFO][5290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.489 [WARNING][5290] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" HandleID="k8s-pod-network.193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.489 [INFO][5290] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" HandleID="k8s-pod-network.193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.490 [INFO][5290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.495587 containerd[1457]: 2025-11-08 00:22:15.492 [INFO][5282] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:22:15.495587 containerd[1457]: time="2025-11-08T00:22:15.495562090Z" level=info msg="TearDown network for sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\" successfully" Nov 8 00:22:15.495991 containerd[1457]: time="2025-11-08T00:22:15.495600464Z" level=info msg="StopPodSandbox for \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\" returns successfully" Nov 8 00:22:15.496267 containerd[1457]: time="2025-11-08T00:22:15.496230182Z" level=info msg="RemovePodSandbox for \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\"" Nov 8 00:22:15.496267 containerd[1457]: time="2025-11-08T00:22:15.496264958Z" level=info msg="Forcibly stopping sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\"" Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.526 [WARNING][5309] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0", GenerateName:"calico-kube-controllers-86f57bbc6c-", Namespace:"calico-system", SelfLink:"", UID:"5ca7b27d-c4bf-4555-ac93-a9fa936c758c", ResourceVersion:"1211", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f57bbc6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcbcd8b68c9145bebf3ce3605acd7f8cd03474f1a6b5a776d71cd3fff4c99d62", Pod:"calico-kube-controllers-86f57bbc6c-bq2jn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95e44acbb14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.526 [INFO][5309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.526 [INFO][5309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" iface="eth0" netns="" Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.526 [INFO][5309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.526 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.545 [INFO][5318] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" HandleID="k8s-pod-network.193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.545 [INFO][5318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.545 [INFO][5318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.551 [WARNING][5318] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" HandleID="k8s-pod-network.193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.551 [INFO][5318] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" HandleID="k8s-pod-network.193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Workload="localhost-k8s-calico--kube--controllers--86f57bbc6c--bq2jn-eth0" Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.552 [INFO][5318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.559486 containerd[1457]: 2025-11-08 00:22:15.556 [INFO][5309] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965" Nov 8 00:22:15.559906 containerd[1457]: time="2025-11-08T00:22:15.559555004Z" level=info msg="TearDown network for sandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\" successfully" Nov 8 00:22:15.564261 containerd[1457]: time="2025-11-08T00:22:15.564234245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:15.564315 containerd[1457]: time="2025-11-08T00:22:15.564276597Z" level=info msg="RemovePodSandbox \"193475f3323abffa3e65f63e0e3c5ff3b4712ace539dae2151fd38827d72f965\" returns successfully" Nov 8 00:22:15.564727 containerd[1457]: time="2025-11-08T00:22:15.564700791Z" level=info msg="StopPodSandbox for \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\"" Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.594 [WARNING][5335] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--htskj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"da9743a8-c863-487c-b161-786bc9c10f6c", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59", Pod:"coredns-674b8bbfcf-htskj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97138940eef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.595 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.595 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" iface="eth0" netns="" Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.595 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.595 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.612 [INFO][5343] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" HandleID="k8s-pod-network.169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.612 [INFO][5343] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.612 [INFO][5343] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.618 [WARNING][5343] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" HandleID="k8s-pod-network.169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.618 [INFO][5343] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" HandleID="k8s-pod-network.169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.619 [INFO][5343] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.624727 containerd[1457]: 2025-11-08 00:22:15.622 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:22:15.625156 containerd[1457]: time="2025-11-08T00:22:15.624766297Z" level=info msg="TearDown network for sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\" successfully" Nov 8 00:22:15.625156 containerd[1457]: time="2025-11-08T00:22:15.624791416Z" level=info msg="StopPodSandbox for \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\" returns successfully" Nov 8 00:22:15.625368 containerd[1457]: time="2025-11-08T00:22:15.625341742Z" level=info msg="RemovePodSandbox for \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\"" Nov 8 00:22:15.625414 containerd[1457]: time="2025-11-08T00:22:15.625376639Z" level=info msg="Forcibly stopping sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\"" Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.657 [WARNING][5361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--htskj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"da9743a8-c863-487c-b161-786bc9c10f6c", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a685c30dfe8260418a7509abf0edfca69b5eec81618a0562e8fc6934e6ad3c59", Pod:"coredns-674b8bbfcf-htskj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97138940eef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.657 [INFO][5361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.657 [INFO][5361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" iface="eth0" netns="" Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.657 [INFO][5361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.657 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.678 [INFO][5370] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" HandleID="k8s-pod-network.169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.678 [INFO][5370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.678 [INFO][5370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.684 [WARNING][5370] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" HandleID="k8s-pod-network.169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.684 [INFO][5370] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" HandleID="k8s-pod-network.169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Workload="localhost-k8s-coredns--674b8bbfcf--htskj-eth0" Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.686 [INFO][5370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.691756 containerd[1457]: 2025-11-08 00:22:15.688 [INFO][5361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc" Nov 8 00:22:15.692203 containerd[1457]: time="2025-11-08T00:22:15.691789067Z" level=info msg="TearDown network for sandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\" successfully" Nov 8 00:22:15.695736 containerd[1457]: time="2025-11-08T00:22:15.695700534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:15.695791 containerd[1457]: time="2025-11-08T00:22:15.695752634Z" level=info msg="RemovePodSandbox \"169a2da8656a5f9c34c467db2f47c29f0e5eb123b73aed9218e92b05860c2ddc\" returns successfully" Nov 8 00:22:15.696400 containerd[1457]: time="2025-11-08T00:22:15.696347547Z" level=info msg="StopPodSandbox for \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\"" Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.728 [WARNING][5387] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4c9a132b-c373-4aff-a37f-8a647d110275", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e", Pod:"coredns-674b8bbfcf-c5rwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia67dca5dbfe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.729 [INFO][5387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.729 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" iface="eth0" netns="" Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.729 [INFO][5387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.729 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.750 [INFO][5396] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" HandleID="k8s-pod-network.7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.750 [INFO][5396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.750 [INFO][5396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.756 [WARNING][5396] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" HandleID="k8s-pod-network.7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.756 [INFO][5396] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" HandleID="k8s-pod-network.7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.757 [INFO][5396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.763284 containerd[1457]: 2025-11-08 00:22:15.760 [INFO][5387] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:22:15.763748 containerd[1457]: time="2025-11-08T00:22:15.763344947Z" level=info msg="TearDown network for sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\" successfully" Nov 8 00:22:15.763748 containerd[1457]: time="2025-11-08T00:22:15.763383371Z" level=info msg="StopPodSandbox for \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\" returns successfully" Nov 8 00:22:15.764021 containerd[1457]: time="2025-11-08T00:22:15.763970077Z" level=info msg="RemovePodSandbox for \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\"" Nov 8 00:22:15.764064 containerd[1457]: time="2025-11-08T00:22:15.764027017Z" level=info msg="Forcibly stopping sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\"" Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.801 [WARNING][5414] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4c9a132b-c373-4aff-a37f-8a647d110275", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45082ca03ad4bba58a7ba6beeeb9a3ed00d3c3b8ad39c2e8b91448d0c108194e", Pod:"coredns-674b8bbfcf-c5rwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia67dca5dbfe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.801 [INFO][5414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.801 [INFO][5414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" iface="eth0" netns="" Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.801 [INFO][5414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.801 [INFO][5414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.821 [INFO][5422] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" HandleID="k8s-pod-network.7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.821 [INFO][5422] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.821 [INFO][5422] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.828 [WARNING][5422] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" HandleID="k8s-pod-network.7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.828 [INFO][5422] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" HandleID="k8s-pod-network.7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Workload="localhost-k8s-coredns--674b8bbfcf--c5rwc-eth0" Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.829 [INFO][5422] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.835390 containerd[1457]: 2025-11-08 00:22:15.832 [INFO][5414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59" Nov 8 00:22:15.836098 containerd[1457]: time="2025-11-08T00:22:15.835427539Z" level=info msg="TearDown network for sandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\" successfully" Nov 8 00:22:15.839830 containerd[1457]: time="2025-11-08T00:22:15.839763501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:15.839901 containerd[1457]: time="2025-11-08T00:22:15.839840128Z" level=info msg="RemovePodSandbox \"7328e3d26cb444521473df3b4a09d1fa4ceefa7b6bbda52b51ef59a17c702c59\" returns successfully" Nov 8 00:22:15.840426 containerd[1457]: time="2025-11-08T00:22:15.840382449Z" level=info msg="StopPodSandbox for \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\"" Nov 8 00:22:15.844961 containerd[1457]: time="2025-11-08T00:22:15.844927172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.874 [WARNING][5439] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" WorkloadEndpoint="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.875 [INFO][5439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.875 [INFO][5439] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" iface="eth0" netns="" Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.875 [INFO][5439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.875 [INFO][5439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.896 [INFO][5448] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" HandleID="k8s-pod-network.56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Workload="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.897 [INFO][5448] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.897 [INFO][5448] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.902 [WARNING][5448] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" HandleID="k8s-pod-network.56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Workload="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.902 [INFO][5448] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" HandleID="k8s-pod-network.56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Workload="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.903 [INFO][5448] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.909722 containerd[1457]: 2025-11-08 00:22:15.906 [INFO][5439] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:22:15.910209 containerd[1457]: time="2025-11-08T00:22:15.909764528Z" level=info msg="TearDown network for sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\" successfully" Nov 8 00:22:15.910209 containerd[1457]: time="2025-11-08T00:22:15.909791699Z" level=info msg="StopPodSandbox for \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\" returns successfully" Nov 8 00:22:15.910439 containerd[1457]: time="2025-11-08T00:22:15.910383275Z" level=info msg="RemovePodSandbox for \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\"" Nov 8 00:22:15.910439 containerd[1457]: time="2025-11-08T00:22:15.910428902Z" level=info msg="Forcibly stopping sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\"" Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.945 [WARNING][5465] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" WorkloadEndpoint="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.946 [INFO][5465] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.946 [INFO][5465] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" iface="eth0" netns="" Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.946 [INFO][5465] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.946 [INFO][5465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.972 [INFO][5474] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" HandleID="k8s-pod-network.56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Workload="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.973 [INFO][5474] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.973 [INFO][5474] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.978 [WARNING][5474] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" HandleID="k8s-pod-network.56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Workload="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.978 [INFO][5474] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" HandleID="k8s-pod-network.56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Workload="localhost-k8s-whisker--68687457bd--6qht5-eth0" Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.979 [INFO][5474] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:15.985505 containerd[1457]: 2025-11-08 00:22:15.982 [INFO][5465] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe" Nov 8 00:22:15.985909 containerd[1457]: time="2025-11-08T00:22:15.985546730Z" level=info msg="TearDown network for sandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\" successfully" Nov 8 00:22:15.989728 containerd[1457]: time="2025-11-08T00:22:15.989693148Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:15.989794 containerd[1457]: time="2025-11-08T00:22:15.989744897Z" level=info msg="RemovePodSandbox \"56857bb2b14496cb93d7c187183576f0ad6c8dd763aa53b98f7924e466a063fe\" returns successfully" Nov 8 00:22:16.188619 containerd[1457]: time="2025-11-08T00:22:16.188457892Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:16.191229 containerd[1457]: time="2025-11-08T00:22:16.191146960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:22:16.191331 containerd[1457]: time="2025-11-08T00:22:16.191207195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:16.191435 kubelet[2518]: E1108 00:22:16.191389 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:16.191834 kubelet[2518]: E1108 00:22:16.191450 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:16.191834 kubelet[2518]: E1108 00:22:16.191607 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtxfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vwhc4_calico-system(2c05d744-421f-40ca-8faf-61db719dbbcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:16.192790 kubelet[2518]: E1108 00:22:16.192756 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwhc4" podUID="2c05d744-421f-40ca-8faf-61db719dbbcd" Nov 8 00:22:16.845445 containerd[1457]: 
time="2025-11-08T00:22:16.845376829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:17.229060 containerd[1457]: time="2025-11-08T00:22:17.228864360Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:17.276292 containerd[1457]: time="2025-11-08T00:22:17.276213648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:17.276451 containerd[1457]: time="2025-11-08T00:22:17.276247262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:17.276592 kubelet[2518]: E1108 00:22:17.276529 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:17.277068 kubelet[2518]: E1108 00:22:17.276599 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:17.277068 kubelet[2518]: E1108 00:22:17.276759 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fchkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b76c59587-pzwfg_calico-apiserver(0eac0b9a-9cb8-40ad-80af-819a17da25f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:17.277989 kubelet[2518]: E1108 00:22:17.277945 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" podUID="0eac0b9a-9cb8-40ad-80af-819a17da25f0" Nov 8 00:22:18.941288 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:59586.service - OpenSSH per-connection server daemon (10.0.0.1:59586). Nov 8 00:22:18.972977 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 59586 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:18.974704 sshd[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:18.978689 systemd-logind[1447]: New session 14 of user core. Nov 8 00:22:18.984948 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:22:19.104859 sshd[5492]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:19.109109 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:59586.service: Deactivated successfully. Nov 8 00:22:19.111507 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:22:19.112267 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:22:19.113242 systemd-logind[1447]: Removed session 14. Nov 8 00:22:23.201329 kubelet[2518]: E1108 00:22:23.201290 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:23.261876 systemd[1]: run-containerd-runc-k8s.io-c75487f53068293ea0737837b5896b7bd4004b566b1576055606cdd2b6899d22-runc.oe0kmQ.mount: Deactivated successfully. 
Nov 8 00:22:23.333689 kubelet[2518]: E1108 00:22:23.333627 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:23.844257 kubelet[2518]: E1108 00:22:23.844211 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" podUID="ad60f746-d64e-4394-bbcc-99e4406b9d56" Nov 8 00:22:24.119422 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:54386.service - OpenSSH per-connection server daemon (10.0.0.1:54386). Nov 8 00:22:24.182430 sshd[5556]: Accepted publickey for core from 10.0.0.1 port 54386 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:24.184205 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:24.188604 systemd-logind[1447]: New session 15 of user core. Nov 8 00:22:24.192932 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:22:24.323529 sshd[5556]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:24.327718 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:54386.service: Deactivated successfully. Nov 8 00:22:24.330040 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:22:24.330789 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:22:24.331925 systemd-logind[1447]: Removed session 15. 
Nov 8 00:22:25.845654 kubelet[2518]: E1108 00:22:25.845585 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84655bbcf4-42fp6" podUID="8afc460d-c7d5-4574-a129-acae64d116ee" Nov 8 00:22:26.845078 kubelet[2518]: E1108 00:22:26.844906 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" podUID="5ca7b27d-c4bf-4555-ac93-a9fa936c758c" Nov 8 00:22:26.845527 kubelet[2518]: E1108 00:22:26.845465 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd" Nov 8 00:22:28.845264 kubelet[2518]: E1108 00:22:28.845202 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwhc4" podUID="2c05d744-421f-40ca-8faf-61db719dbbcd" Nov 8 00:22:28.845823 kubelet[2518]: E1108 00:22:28.845546 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" podUID="0eac0b9a-9cb8-40ad-80af-819a17da25f0" Nov 8 00:22:29.339056 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:54398.service - OpenSSH per-connection server daemon (10.0.0.1:54398). Nov 8 00:22:29.372271 sshd[5582]: Accepted publickey for core from 10.0.0.1 port 54398 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:29.374155 sshd[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:29.378671 systemd-logind[1447]: New session 16 of user core. Nov 8 00:22:29.395097 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:22:29.508064 sshd[5582]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:29.512776 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:54398.service: Deactivated successfully. Nov 8 00:22:29.515287 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:22:29.515992 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:22:29.517336 systemd-logind[1447]: Removed session 16. Nov 8 00:22:34.519472 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:60398.service - OpenSSH per-connection server daemon (10.0.0.1:60398). Nov 8 00:22:34.555931 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 60398 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:34.557798 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:34.561910 systemd-logind[1447]: New session 17 of user core. Nov 8 00:22:34.570997 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:22:34.711064 sshd[5598]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:34.716739 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:60398.service: Deactivated successfully. Nov 8 00:22:34.718899 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:22:34.719500 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:22:34.720404 systemd-logind[1447]: Removed session 17. 
Nov 8 00:22:37.844444 kubelet[2518]: E1108 00:22:37.844401 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:38.844872 containerd[1457]: time="2025-11-08T00:22:38.844772842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:22:39.205291 containerd[1457]: time="2025-11-08T00:22:39.205158285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:39.206419 containerd[1457]: time="2025-11-08T00:22:39.206390776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:22:39.206492 containerd[1457]: time="2025-11-08T00:22:39.206460738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:22:39.206614 kubelet[2518]: E1108 00:22:39.206570 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:22:39.206964 kubelet[2518]: E1108 00:22:39.206618 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:22:39.206964 kubelet[2518]: E1108 00:22:39.206855 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glmpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-67lwd_calico-system(6fb889d5-2903-4e6b-a458-6fb9eecb4dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:39.207072 containerd[1457]: time="2025-11-08T00:22:39.206953504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:39.588638 containerd[1457]: time="2025-11-08T00:22:39.588584099Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:39.589915 containerd[1457]: time="2025-11-08T00:22:39.589877735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:39.589995 containerd[1457]: time="2025-11-08T00:22:39.589918944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:39.590128 kubelet[2518]: E1108 00:22:39.590076 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:39.590187 kubelet[2518]: E1108 00:22:39.590140 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:39.590457 kubelet[2518]: E1108 00:22:39.590399 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tz9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b76c59587-szgkh_calico-apiserver(ad60f746-d64e-4394-bbcc-99e4406b9d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:39.590578 containerd[1457]: time="2025-11-08T00:22:39.590516458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:22:39.591909 kubelet[2518]: E1108 00:22:39.591869 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" podUID="ad60f746-d64e-4394-bbcc-99e4406b9d56" Nov 8 00:22:39.722889 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:60412.service - OpenSSH 
per-connection server daemon (10.0.0.1:60412). Nov 8 00:22:39.765426 sshd[5618]: Accepted publickey for core from 10.0.0.1 port 60412 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:39.766872 sshd[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:39.771347 systemd-logind[1447]: New session 18 of user core. Nov 8 00:22:39.779928 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:22:39.901544 sshd[5618]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:39.912134 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:60412.service: Deactivated successfully. Nov 8 00:22:39.914259 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:22:39.916005 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:22:39.921219 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:60420.service - OpenSSH per-connection server daemon (10.0.0.1:60420). Nov 8 00:22:39.922313 systemd-logind[1447]: Removed session 18. Nov 8 00:22:39.939920 containerd[1457]: time="2025-11-08T00:22:39.939863270Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:39.940992 containerd[1457]: time="2025-11-08T00:22:39.940931959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:22:39.941056 containerd[1457]: time="2025-11-08T00:22:39.940988176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:39.941313 kubelet[2518]: E1108 00:22:39.941202 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:39.941313 kubelet[2518]: E1108 00:22:39.941281 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:39.941636 containerd[1457]: time="2025-11-08T00:22:39.941606460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:22:39.941690 kubelet[2518]: E1108 00:22:39.941607 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcjcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86f57bbc6c-bq2jn_calico-system(5ca7b27d-c4bf-4555-ac93-a9fa936c758c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:39.944844 kubelet[2518]: E1108 00:22:39.944776 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" podUID="5ca7b27d-c4bf-4555-ac93-a9fa936c758c" Nov 8 00:22:39.953782 sshd[5632]: Accepted publickey for core from 10.0.0.1 port 60420 ssh2: RSA 
SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:39.955567 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:39.960171 systemd-logind[1447]: New session 19 of user core. Nov 8 00:22:39.968939 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:22:40.347286 containerd[1457]: time="2025-11-08T00:22:40.347246032Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:40.376336 containerd[1457]: time="2025-11-08T00:22:40.376272574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:22:40.376465 containerd[1457]: time="2025-11-08T00:22:40.376356694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:22:40.376521 kubelet[2518]: E1108 00:22:40.376476 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:40.376902 kubelet[2518]: E1108 00:22:40.376530 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:40.376902 kubelet[2518]: E1108 00:22:40.376742 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glmpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-67lwd_calico-system(6fb889d5-2903-4e6b-a458-6fb9eecb4dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:40.377024 containerd[1457]: time="2025-11-08T00:22:40.376786761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:22:40.378270 kubelet[2518]: E1108 00:22:40.378217 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd" Nov 8 00:22:40.743727 sshd[5632]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:40.754882 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:60420.service: Deactivated successfully. Nov 8 00:22:40.756701 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:22:40.758296 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. 
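Every pull attempt in this stretch ends the same way: containerd walks its registry hosts for ghcr.io, the registry answers http.StatusNotFound for the v3.30.4 manifest, and kubelet surfaces ErrImagePull. Below is a minimal sketch of the same resolution check done out-of-band, assuming the standard OCI Distribution token and manifest endpoints on ghcr.io; the repository and tag are taken from the log, everything else is illustrative.

```python
# Check whether a tag resolves on ghcr.io the way containerd does:
# fetch an anonymous pull token, then HEAD the manifest. A 404 is the
# "not found" seen in the log above.
import json
import urllib.error
import urllib.request

IMAGE = "flatcar/calico/kube-controllers"  # repository from the log
TAG = "v3.30.4"                            # tag that failed to resolve

def tag_exists(image: str, tag: str) -> bool:
    # Anonymous pull token (standard distribution-spec handshake).
    token_url = (
        "https://ghcr.io/token?service=ghcr.io"
        f"&scope=repository:{image}:pull"
    )
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # HEAD the manifest: 200 means the tag resolves.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{image}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

print(tag_exists(IMAGE, TAG))
```

A 404 here matches containerd's "failed to resolve reference ... not found" and points at a tag that was never pushed, rather than an authentication or network problem.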
Nov 8 00:22:40.764225 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:60434.service - OpenSSH per-connection server daemon (10.0.0.1:60434). Nov 8 00:22:40.765286 systemd-logind[1447]: Removed session 19. Nov 8 00:22:40.795613 sshd[5647]: Accepted publickey for core from 10.0.0.1 port 60434 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:40.797683 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:40.802321 systemd-logind[1447]: New session 20 of user core. Nov 8 00:22:40.809971 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:22:40.876610 containerd[1457]: time="2025-11-08T00:22:40.876538104Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:40.877849 containerd[1457]: time="2025-11-08T00:22:40.877793007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:22:40.877986 containerd[1457]: time="2025-11-08T00:22:40.877909548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:22:40.878128 kubelet[2518]: E1108 00:22:40.878077 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:40.878225 kubelet[2518]: E1108 00:22:40.878144 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:40.878941 kubelet[2518]: E1108 00:22:40.878405 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2875f70c47664434af66c35207c6af08,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqdcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84655bbcf4-42fp6_calico-system(8afc460d-c7d5-4574-a129-acae64d116ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:40.879050 containerd[1457]: time="2025-11-08T00:22:40.878523594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:22:41.216578 containerd[1457]: time="2025-11-08T00:22:41.216524615Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:41.259120 containerd[1457]: time="2025-11-08T00:22:41.259064557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:22:41.259264 containerd[1457]: time="2025-11-08T00:22:41.259129721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:41.259355 kubelet[2518]: E1108 00:22:41.259310 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:41.259577 kubelet[2518]: E1108 00:22:41.259372 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:41.259702 
kubelet[2518]: E1108 00:22:41.259639 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtxfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vwhc4_calico-system(2c05d744-421f-40ca-8faf-61db719dbbcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:41.260453 containerd[1457]: time="2025-11-08T00:22:41.260408508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:41.261975 kubelet[2518]: E1108 00:22:41.260833 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwhc4" podUID="2c05d744-421f-40ca-8faf-61db719dbbcd" Nov 8 00:22:41.637468 sshd[5647]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:41.640969 containerd[1457]: time="2025-11-08T00:22:41.640911402Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:41.642279 containerd[1457]: time="2025-11-08T00:22:41.642207371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:41.642421 containerd[1457]: time="2025-11-08T00:22:41.642300508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:41.642528 kubelet[2518]: E1108 00:22:41.642481 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:41.648952 kubelet[2518]: E1108 00:22:41.642542 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:41.648952 kubelet[2518]: E1108 00:22:41.642746 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fchkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b76c59587-pzwfg_calico-apiserver(0eac0b9a-9cb8-40ad-80af-819a17da25f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:41.648952 kubelet[2518]: E1108 00:22:41.643854 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" podUID="0eac0b9a-9cb8-40ad-80af-819a17da25f0" Nov 8 00:22:41.649123 containerd[1457]: time="2025-11-08T00:22:41.644031303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:22:41.655183 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:60442.service - OpenSSH per-connection server daemon (10.0.0.1:60442). Nov 8 00:22:41.655763 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:60434.service: Deactivated successfully. Nov 8 00:22:41.657658 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:22:41.659506 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:22:41.662329 systemd-logind[1447]: Removed session 20. Nov 8 00:22:41.689329 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 60442 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:41.691404 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:41.695718 systemd-logind[1447]: New session 21 of user core. Nov 8 00:22:41.709951 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:22:41.966040 sshd[5669]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:41.974491 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:60442.service: Deactivated successfully. Nov 8 00:22:41.976895 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:22:41.978661 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:22:41.986841 containerd[1457]: time="2025-11-08T00:22:41.986755190Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:41.987279 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:60444.service - OpenSSH per-connection server daemon (10.0.0.1:60444). 
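When a container start fails, kubelet's kuberuntime_manager logs the entire serialized Container spec alongside the pod, as in the "Unhandled Error" entries above, which buries the few interesting fields. A small sketch for skimming such lines follows; the regular expressions and helper name are invented here, not anything kubelet provides.

```python
# Extract pod, namespace, and image from kubelet's
# "start failed in pod <pod>_<ns>(<uid>): <reason>" dumps above.
import re

START_FAILED = re.compile(
    r"start failed in pod (?P<pod>[^_]+)_(?P<ns>[^(]+)\((?P<uid>[^)]+)\): "
    r"(?P<reason>\w+)"
)
IMAGE = re.compile(r"Image:(?P<image>[^,]+),")

def summarize(line: str) -> dict | None:
    m = START_FAILED.search(line)
    if not m:
        return None
    img = IMAGE.search(line)
    return {
        "pod": m["pod"],
        "namespace": m["ns"],
        "uid": m["uid"],
        "reason": m["reason"],
        "image": img["image"] if img else None,
    }

# Shortened example in the shape of the log entries above.
print(summarize(
    "container &Container{Name:whisker,"
    "Image:ghcr.io/flatcar/calico/whisker:v3.30.4,} "
    "start failed in pod whisker-84655bbcf4-42fp6_calico-system"
    "(8afc460d-c7d5-4574-a129-acae64d116ee): ErrImagePull: rpc error"
))
```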
Nov 8 00:22:41.988333 systemd-logind[1447]: Removed session 21. Nov 8 00:22:42.012967 containerd[1457]: time="2025-11-08T00:22:42.012902126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:22:42.013084 containerd[1457]: time="2025-11-08T00:22:42.012952281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:42.013195 kubelet[2518]: E1108 00:22:42.013148 2518 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:42.013273 kubelet[2518]: E1108 00:22:42.013214 2518 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:42.013412 kubelet[2518]: E1108 00:22:42.013367 2518 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqdcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-84655bbcf4-42fp6_calico-system(8afc460d-c7d5-4574-a129-acae64d116ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:42.014636 kubelet[2518]: E1108 00:22:42.014587 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84655bbcf4-42fp6" podUID="8afc460d-c7d5-4574-a129-acae64d116ee" Nov 8 00:22:42.015970 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 60444 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:42.017551 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:42.021504 systemd-logind[1447]: New session 22 of user core. Nov 8 00:22:42.028935 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:22:42.238826 sshd[5684]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:42.242589 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:60444.service: Deactivated successfully. Nov 8 00:22:42.245165 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:22:42.247328 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:22:42.248702 systemd-logind[1447]: Removed session 22. Nov 8 00:22:47.251725 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:50110.service - OpenSSH per-connection server daemon (10.0.0.1:50110). Nov 8 00:22:47.284382 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 50110 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:47.286128 sshd[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:47.290922 systemd-logind[1447]: New session 23 of user core. Nov 8 00:22:47.301258 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:22:47.414131 sshd[5698]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:47.418139 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:50110.service: Deactivated successfully. Nov 8 00:22:47.420283 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:22:47.420897 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:22:47.421870 systemd-logind[1447]: Removed session 23. 
Nov 8 00:22:48.844247 kubelet[2518]: E1108 00:22:48.844161 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:49.844385 kubelet[2518]: E1108 00:22:49.844332 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:49.844868 kubelet[2518]: E1108 00:22:49.844435 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:22:50.845108 kubelet[2518]: E1108 00:22:50.845045 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-szgkh" podUID="ad60f746-d64e-4394-bbcc-99e4406b9d56" Nov 8 00:22:51.844842 kubelet[2518]: E1108 00:22:51.844709 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwhc4" podUID="2c05d744-421f-40ca-8faf-61db719dbbcd" Nov 8 00:22:52.430174 systemd[1]: Started sshd@23-10.0.0.49:22-10.0.0.1:50120.service - OpenSSH per-connection server daemon (10.0.0.1:50120). Nov 8 00:22:52.462390 sshd[5716]: Accepted publickey for core from 10.0.0.1 port 50120 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:52.464037 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:52.467889 systemd-logind[1447]: New session 24 of user core. Nov 8 00:22:52.476949 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:22:52.589795 sshd[5716]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:52.593893 systemd[1]: sshd@23-10.0.0.49:22-10.0.0.1:50120.service: Deactivated successfully. Nov 8 00:22:52.596214 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:22:52.597278 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:22:52.598435 systemd-logind[1447]: Removed session 24. 
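The dns.go:153 warnings at the top of this stretch are kubelet noticing more nameserver entries in the node's resolv.conf than libc will use: glibc honours at most three, so kubelet trims the list and logs the applied line seen here (1.1.1.1 1.0.0.1 8.8.8.8). A sketch of that cap, with the constant and parsing simplified rather than copied from kubelet:

```python
# Reproduce the trim behind "Nameserver limits exceeded": keep only
# the first three nameservers, as the glibc resolver (MAXNS) does.
MAX_NAMESERVERS = 3  # glibc MAXNS

def effective_nameservers(resolv_conf: str) -> list[str]:
    servers = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    if len(servers) > MAX_NAMESERVERS:
        print(
            "Nameserver limits were exceeded, some nameservers have been "
            "omitted, the applied nameserver line is: "
            + " ".join(servers[:MAX_NAMESERVERS])
        )
    return servers[:MAX_NAMESERVERS]

# Four configured servers: the fourth is dropped, matching the log.
effective_nameservers(
    "nameserver 1.1.1.1\n"
    "nameserver 1.0.0.1\n"
    "nameserver 8.8.8.8\n"
    "nameserver 8.8.4.4\n"
)
```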
Nov 8 00:22:52.845555 kubelet[2518]: E1108 00:22:52.845501 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-67lwd" podUID="6fb889d5-2903-4e6b-a458-6fb9eecb4dcd" Nov 8 00:22:53.352399 systemd[1]: run-containerd-runc-k8s.io-c75487f53068293ea0737837b5896b7bd4004b566b1576055606cdd2b6899d22-runc.95jpeE.mount: Deactivated successfully. Nov 8 00:22:53.844350 kubelet[2518]: E1108 00:22:53.844296 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f57bbc6c-bq2jn" podUID="5ca7b27d-c4bf-4555-ac93-a9fa936c758c" Nov 8 00:22:56.845611 kubelet[2518]: E1108 00:22:56.845549 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b76c59587-pzwfg" podUID="0eac0b9a-9cb8-40ad-80af-819a17da25f0" Nov 8 00:22:56.848892 kubelet[2518]: E1108 00:22:56.848799 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-84655bbcf4-42fp6" podUID="8afc460d-c7d5-4574-a129-acae64d116ee" Nov 8 00:22:57.603299 systemd[1]: Started sshd@24-10.0.0.49:22-10.0.0.1:37032.service - OpenSSH per-connection server daemon (10.0.0.1:37032). Nov 8 00:22:57.658496 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 37032 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:22:57.660854 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:57.666971 systemd-logind[1447]: New session 25 of user core. Nov 8 00:22:57.672063 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:22:57.849601 sshd[5753]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:57.853843 systemd[1]: sshd@24-10.0.0.49:22-10.0.0.1:37032.service: Deactivated successfully. Nov 8 00:22:57.857129 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:22:57.858956 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:22:57.860106 systemd-logind[1447]: Removed session 25. Nov 8 00:22:58.075184 update_engine[1449]: I20251108 00:22:58.075088 1449 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 8 00:22:58.075184 update_engine[1449]: I20251108 00:22:58.075162 1449 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 8 00:22:58.076539 update_engine[1449]: I20251108 00:22:58.076484 1449 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 8 00:22:58.077085 update_engine[1449]: I20251108 00:22:58.077049 1449 omaha_request_params.cc:62] Current group set to lts Nov 8 00:22:58.077232 update_engine[1449]: I20251108 00:22:58.077196 1449 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 8 00:22:58.077232 update_engine[1449]: I20251108 00:22:58.077211 1449 update_attempter.cc:643] Scheduling an action processor start. Nov 8 00:22:58.077232 update_engine[1449]: I20251108 00:22:58.077232 1449 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 8 00:22:58.077321 update_engine[1449]: I20251108 00:22:58.077272 1449 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 8 00:22:58.077355 update_engine[1449]: I20251108 00:22:58.077335 1449 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 8 00:22:58.077355 update_engine[1449]: I20251108 00:22:58.077347 1449 omaha_request_action.cc:272] Request: Nov 8 00:22:58.077355 update_engine[1449]: Nov 8 00:22:58.077355 update_engine[1449]: Nov 8 00:22:58.077355 update_engine[1449]: Nov 8 00:22:58.077355 update_engine[1449]: Nov 8 00:22:58.077355 update_engine[1449]: Nov 8 00:22:58.077355 update_engine[1449]: Nov 8 00:22:58.077355 update_engine[1449]: Nov 8 00:22:58.077355 update_engine[1449]: Nov 8 00:22:58.077554 update_engine[1449]: I20251108 00:22:58.077355 1449 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:22:58.082722 update_engine[1449]: I20251108 00:22:58.081308 1449 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:22:58.082722 update_engine[1449]: I20251108 00:22:58.081619 1449 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 8 00:22:58.084428 locksmithd[1476]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 8 00:22:58.088452 update_engine[1449]: E20251108 00:22:58.088398 1449 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:22:58.088583 update_engine[1449]: I20251108 00:22:58.088491 1449 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
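The closing update_engine entries explain themselves once the Omaha server is known: "Posting an Omaha request to disabled" means the configured server URL is the literal string "disabled" (the convention Flatcar documents for switching updates off in /etc/flatcar/update.conf), so curl then fails with "Could not resolve host: disabled" and the fetcher schedules a retry. A sketch of reading that setting, with the parsing simplified rather than taken from update_engine:

```python
# Read the Omaha server from a Flatcar-style update.conf. The
# SERVER=disabled convention follows Flatcar's docs; this parser is
# a simplified assumption, not update_engine's actual code.
def omaha_server(conf_text: str) -> str | None:
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith("SERVER="):
            return line.split("=", 1)[1]
    return None

# GROUP=lts matches "Current group set to lts" in the log above.
conf = "GROUP=lts\nSERVER=disabled\n"
if omaha_server(conf) == "disabled":
    # update_engine still runs its state machine, so it POSTs to the
    # literal host "disabled" and curl reports a resolution error.
    print("updates disabled; Omaha POSTs will fail to resolve 'disabled'")
```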