Nov 1 00:21:00.062716 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025 Nov 1 00:21:00.062788 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:21:00.062817 kernel: BIOS-provided physical RAM map: Nov 1 00:21:00.062828 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 1 00:21:00.062840 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 1 00:21:00.062852 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 1 00:21:00.062866 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Nov 1 00:21:00.062878 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Nov 1 00:21:00.062889 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 1 00:21:00.062910 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 1 00:21:00.062927 kernel: NX (Execute Disable) protection: active Nov 1 00:21:00.062940 kernel: APIC: Static calls initialized Nov 1 00:21:00.062968 kernel: SMBIOS 2.8 present. Nov 1 00:21:00.062983 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Nov 1 00:21:00.063001 kernel: Hypervisor detected: KVM Nov 1 00:21:00.063022 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 1 00:21:00.063040 kernel: kvm-clock: using sched offset of 3449440856 cycles Nov 1 00:21:00.063103 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 1 00:21:00.063120 kernel: tsc: Detected 2494.138 MHz processor Nov 1 00:21:00.063135 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:21:00.063152 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:21:00.063165 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 1 00:21:00.063178 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 1 00:21:00.063192 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:21:00.063214 kernel: ACPI: Early table checksum verification disabled Nov 1 00:21:00.063226 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Nov 1 00:21:00.063240 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:21:00.063252 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:21:00.063268 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:21:00.063281 kernel: ACPI: FACS 0x000000007FFE0000 000040 Nov 1 00:21:00.063292 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:21:00.063305 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:21:00.063318 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:21:00.063340 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:21:00.063356 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] Nov 1 00:21:00.063389 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Nov 1 00:21:00.063933 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Nov 1 00:21:00.063960 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Nov 1 00:21:00.063976 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Nov 1 00:21:00.063993 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Nov 1 00:21:00.064028 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Nov 1 00:21:00.064042 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 1 00:21:00.064055 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 1 00:21:00.064072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 1 00:21:00.064113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 1 00:21:00.064141 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Nov 1 00:21:00.064160 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Nov 1 00:21:00.064182 kernel: Zone ranges: Nov 1 00:21:00.064198 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:21:00.064212 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Nov 1 00:21:00.064226 kernel: Normal empty Nov 1 00:21:00.064240 kernel: Movable zone start for each node Nov 1 00:21:00.064254 kernel: Early memory node ranges Nov 1 00:21:00.064268 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 1 00:21:00.064283 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Nov 1 00:21:00.064296 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Nov 1 00:21:00.064318 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:21:00.064331 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 1 00:21:00.064352 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Nov 1 00:21:00.064366 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 1 00:21:00.064403 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:21:00.064418 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:21:00.064431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:21:00.064444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 00:21:00.064460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:21:00.064483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:21:00.064496 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:21:00.064511 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:21:00.064526 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:21:00.064543 kernel: TSC deadline timer available Nov 1 00:21:00.064557 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 1 00:21:00.064572 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 1 00:21:00.064586 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Nov 1 00:21:00.064606 kernel: Booting paravirtualized kernel on KVM Nov 1 00:21:00.064623 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:21:00.064644 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 1 00:21:00.064659 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 1 00:21:00.064674 kernel: 
pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 1 00:21:00.064688 kernel: pcpu-alloc: [0] 0 1 Nov 1 00:21:00.064701 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 1 00:21:00.064720 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:21:00.064735 kernel: random: crng init done Nov 1 00:21:00.064748 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:21:00.064770 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 1 00:21:00.064783 kernel: Fallback order for Node 0: 0 Nov 1 00:21:00.064797 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Nov 1 00:21:00.064813 kernel: Policy zone: DMA32 Nov 1 00:21:00.064829 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:21:00.064846 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 125148K reserved, 0K cma-reserved) Nov 1 00:21:00.064860 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 1 00:21:00.064877 kernel: Kernel/User page tables isolation: enabled Nov 1 00:21:00.064893 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 00:21:00.064917 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 00:21:00.064932 kernel: Dynamic Preempt: voluntary Nov 1 00:21:00.064947 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:21:00.064964 kernel: rcu: RCU event tracing is enabled. Nov 1 00:21:00.064979 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 1 00:21:00.064994 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:21:00.065009 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:21:00.065022 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:21:00.065036 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:21:00.065055 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 1 00:21:00.065069 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 1 00:21:00.065083 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 1 00:21:00.065098 kernel: Console: colour VGA+ 80x25 Nov 1 00:21:00.065120 kernel: printk: console [tty0] enabled Nov 1 00:21:00.065135 kernel: printk: console [ttyS0] enabled Nov 1 00:21:00.065151 kernel: ACPI: Core revision 20230628 Nov 1 00:21:00.065166 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 1 00:21:00.065182 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:21:00.065202 kernel: x2apic enabled Nov 1 00:21:00.065219 kernel: APIC: Switched APIC routing to: physical x2apic Nov 1 00:21:00.065235 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 1 00:21:00.065251 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Nov 1 00:21:00.065265 kernel: Calibrating delay loop (skipped) preset value.. 
4988.27 BogoMIPS (lpj=2494138) Nov 1 00:21:00.065280 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 1 00:21:00.065296 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 1 00:21:00.065313 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:21:00.065355 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:21:00.067078 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:21:00.067132 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 1 00:21:00.067150 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:21:00.067181 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 00:21:00.067195 kernel: MDS: Mitigation: Clear CPU buffers Nov 1 00:21:00.067209 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 00:21:00.067224 kernel: active return thunk: its_return_thunk Nov 1 00:21:00.067243 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 00:21:00.067274 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:21:00.067290 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:21:00.067308 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:21:00.067325 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:21:00.067342 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 1 00:21:00.067358 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:21:00.067393 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:21:00.067430 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 00:21:00.067454 kernel: landlock: Up and running. Nov 1 00:21:00.067472 kernel: SELinux: Initializing. Nov 1 00:21:00.067486 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 1 00:21:00.067501 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 1 00:21:00.067517 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Nov 1 00:21:00.067533 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:21:00.067548 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:21:00.067562 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:21:00.067579 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Nov 1 00:21:00.067600 kernel: signal: max sigframe size: 1776 Nov 1 00:21:00.067614 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:21:00.067631 kernel: rcu: Max phase no-delay instances is 400. Nov 1 00:21:00.067645 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 1 00:21:00.067660 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:21:00.067674 kernel: smpboot: x86: Booting SMP configuration: Nov 1 00:21:00.067689 kernel: .... 
node #0, CPUs: #1 Nov 1 00:21:00.067703 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 00:21:00.067725 kernel: smpboot: Max logical packages: 1 Nov 1 00:21:00.067744 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Nov 1 00:21:00.067759 kernel: devtmpfs: initialized Nov 1 00:21:00.067776 kernel: x86/mm: Memory block size: 128MB Nov 1 00:21:00.067791 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:21:00.067809 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 1 00:21:00.067827 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:21:00.067846 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:21:00.067861 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:21:00.067877 kernel: audit: type=2000 audit(1761956458.280:1): state=initialized audit_enabled=0 res=1 Nov 1 00:21:00.067899 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:21:00.067913 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:21:00.067927 kernel: cpuidle: using governor menu Nov 1 00:21:00.067941 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:21:00.067956 kernel: dca service started, version 1.12.1 Nov 1 00:21:00.067975 kernel: PCI: Using configuration type 1 for base access Nov 1 00:21:00.067992 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 00:21:00.068008 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:21:00.068023 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 00:21:00.068045 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:21:00.068064 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:21:00.068080 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:21:00.068097 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:21:00.068115 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 1 00:21:00.068130 kernel: ACPI: Interpreter enabled Nov 1 00:21:00.068145 kernel: ACPI: PM: (supports S0 S5) Nov 1 00:21:00.068160 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:21:00.068176 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:21:00.068197 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 00:21:00.068212 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 1 00:21:00.068226 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:21:00.070888 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:21:00.071173 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 1 00:21:00.071418 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 1 00:21:00.071448 kernel: acpiphp: Slot [3] registered Nov 1 00:21:00.071479 kernel: acpiphp: Slot [4] registered Nov 1 00:21:00.071493 kernel: acpiphp: Slot [5] registered Nov 1 00:21:00.071507 kernel: acpiphp: Slot [6] registered Nov 1 00:21:00.071521 kernel: acpiphp: Slot [7] registered Nov 1 00:21:00.071537 kernel: acpiphp: Slot [8] registered Nov 1 00:21:00.071554 kernel: acpiphp: Slot [9] registered Nov 1 00:21:00.071569 kernel: acpiphp: Slot [10] registered Nov 1 00:21:00.071587 kernel: acpiphp: Slot [11] registered Nov 1 00:21:00.071604 kernel: acpiphp: Slot [12] registered Nov 1 
00:21:00.071621 kernel: acpiphp: Slot [13] registered Nov 1 00:21:00.071646 kernel: acpiphp: Slot [14] registered Nov 1 00:21:00.071661 kernel: acpiphp: Slot [15] registered Nov 1 00:21:00.071676 kernel: acpiphp: Slot [16] registered Nov 1 00:21:00.071693 kernel: acpiphp: Slot [17] registered Nov 1 00:21:00.071711 kernel: acpiphp: Slot [18] registered Nov 1 00:21:00.071728 kernel: acpiphp: Slot [19] registered Nov 1 00:21:00.071746 kernel: acpiphp: Slot [20] registered Nov 1 00:21:00.071762 kernel: acpiphp: Slot [21] registered Nov 1 00:21:00.071777 kernel: acpiphp: Slot [22] registered Nov 1 00:21:00.071800 kernel: acpiphp: Slot [23] registered Nov 1 00:21:00.071814 kernel: acpiphp: Slot [24] registered Nov 1 00:21:00.071829 kernel: acpiphp: Slot [25] registered Nov 1 00:21:00.071846 kernel: acpiphp: Slot [26] registered Nov 1 00:21:00.071861 kernel: acpiphp: Slot [27] registered Nov 1 00:21:00.071879 kernel: acpiphp: Slot [28] registered Nov 1 00:21:00.071895 kernel: acpiphp: Slot [29] registered Nov 1 00:21:00.071925 kernel: acpiphp: Slot [30] registered Nov 1 00:21:00.071941 kernel: acpiphp: Slot [31] registered Nov 1 00:21:00.071956 kernel: PCI host bridge to bus 0000:00 Nov 1 00:21:00.072246 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:21:00.072475 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:21:00.072649 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:21:00.072824 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 1 00:21:00.073003 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Nov 1 00:21:00.077679 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:21:00.078075 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 1 00:21:00.078330 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Nov 1 00:21:00.078623 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Nov 1 00:21:00.078830 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Nov 1 00:21:00.079033 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 1 00:21:00.079371 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 1 00:21:00.079601 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 1 00:21:00.079822 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 1 00:21:00.080069 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Nov 1 00:21:00.080238 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Nov 1 00:21:00.080502 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 1 00:21:00.080682 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Nov 1 00:21:00.080855 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Nov 1 00:21:00.081088 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Nov 1 00:21:00.081277 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Nov 1 00:21:00.083695 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Nov 1 00:21:00.083960 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Nov 1 00:21:00.084144 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Nov 1 00:21:00.084318 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:21:00.086957 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:21:00.087244 kernel: 
pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Nov 1 00:21:00.087823 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Nov 1 00:21:00.088056 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Nov 1 00:21:00.088310 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:21:00.088544 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Nov 1 00:21:00.088731 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Nov 1 00:21:00.088923 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Nov 1 00:21:00.089143 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Nov 1 00:21:00.089345 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Nov 1 00:21:00.091789 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Nov 1 00:21:00.092037 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Nov 1 00:21:00.092265 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Nov 1 00:21:00.092504 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Nov 1 00:21:00.092695 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Nov 1 00:21:00.092898 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Nov 1 00:21:00.093114 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Nov 1 00:21:00.093302 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Nov 1 00:21:00.095764 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Nov 1 00:21:00.095990 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Nov 1 00:21:00.096219 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Nov 1 00:21:00.096576 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Nov 1 00:21:00.096812 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Nov 1 00:21:00.096837 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:21:00.096852 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:21:00.096868 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:21:00.096885 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:21:00.096905 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 1 00:21:00.096922 kernel: iommu: Default domain type: Translated Nov 1 00:21:00.096951 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:21:00.096969 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:21:00.096987 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:21:00.097004 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 1 00:21:00.097022 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Nov 1 00:21:00.097225 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Nov 1 00:21:00.105839 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Nov 1 00:21:00.106180 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:21:00.106211 kernel: vgaarb: loaded Nov 1 00:21:00.106252 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 1 00:21:00.106272 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 1 00:21:00.106291 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:21:00.106307 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:21:00.106324 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:21:00.106339 kernel: pnp: PnP ACPI init Nov 1 00:21:00.106353 kernel: pnp: PnP ACPI: found 4 devices Nov 1 
00:21:00.106401 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:21:00.106417 kernel: NET: Registered PF_INET protocol family Nov 1 00:21:00.106439 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:21:00.106455 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 1 00:21:00.106474 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:21:00.106492 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 00:21:00.106509 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 00:21:00.106524 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 1 00:21:00.106541 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 00:21:00.106560 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 00:21:00.106584 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:21:00.106598 kernel: NET: Registered PF_XDP protocol family Nov 1 00:21:00.106860 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:21:00.107028 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:21:00.107181 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:21:00.107337 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 1 00:21:00.107511 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Nov 1 00:21:00.107705 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Nov 1 00:21:00.107904 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 1 00:21:00.107953 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 1 00:21:00.108135 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 31189 usecs Nov 1 00:21:00.108160 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:21:00.108176 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 1 00:21:00.108192 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Nov 1 00:21:00.108207 kernel: Initialise system trusted keyrings Nov 1 00:21:00.108222 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 1 00:21:00.108237 kernel: Key type asymmetric registered Nov 1 00:21:00.108263 kernel: Asymmetric key parser 'x509' registered Nov 1 00:21:00.108277 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 00:21:00.108291 kernel: io scheduler mq-deadline registered Nov 1 00:21:00.108306 kernel: io scheduler kyber registered Nov 1 00:21:00.108321 kernel: io scheduler bfq registered Nov 1 00:21:00.108336 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:21:00.108351 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Nov 1 00:21:00.108367 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 1 00:21:00.108405 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 1 00:21:00.108427 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:21:00.108443 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:21:00.108458 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:21:00.108474 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:21:00.108489 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:21:00.108504 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Nov 1 00:21:00.108761 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 1 00:21:00.108934 kernel: rtc_cmos 00:03: registered as rtc0 Nov 1 00:21:00.109108 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:20:59 UTC (1761956459) Nov 1 00:21:00.109278 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 1 00:21:00.109302 kernel: intel_pstate: CPU model not supported Nov 1 00:21:00.109321 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:21:00.109339 kernel: Segment Routing with IPv6 Nov 1 00:21:00.109357 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:21:00.109408 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:21:00.109428 kernel: Key type dns_resolver registered Nov 1 00:21:00.109449 kernel: IPI shorthand broadcast: enabled Nov 1 00:21:00.109479 kernel: sched_clock: Marking stable (1047008915, 147300970)->(1328427671, -134117786) Nov 1 00:21:00.109496 kernel: registered taskstats version 1 Nov 1 00:21:00.109515 kernel: Loading compiled-in X.509 certificates Nov 1 00:21:00.109534 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:21:00.109550 kernel: Key type .fscrypt registered Nov 1 00:21:00.109568 kernel: Key type fscrypt-provisioning registered Nov 1 00:21:00.109585 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 1 00:21:00.109603 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:21:00.109619 kernel: ima: No architecture policies found Nov 1 00:21:00.109641 kernel: clk: Disabling unused clocks Nov 1 00:21:00.109655 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:21:00.109670 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:21:00.109685 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:21:00.109751 kernel: Run /init as init process Nov 1 00:21:00.109780 kernel: with arguments: Nov 1 00:21:00.109798 kernel: /init Nov 1 00:21:00.109816 kernel: with environment: Nov 1 00:21:00.109834 kernel: HOME=/ Nov 1 00:21:00.109861 kernel: TERM=linux Nov 1 00:21:00.109885 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:21:00.109909 systemd[1]: Detected virtualization kvm. Nov 1 00:21:00.109930 systemd[1]: Detected architecture x86-64. Nov 1 00:21:00.109951 systemd[1]: Running in initrd. Nov 1 00:21:00.109970 systemd[1]: No hostname configured, using default hostname. Nov 1 00:21:00.109991 systemd[1]: Hostname set to . Nov 1 00:21:00.110020 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:21:00.110041 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:21:00.110061 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:21:00.110081 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:21:00.110104 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:21:00.110124 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Nov 1 00:21:00.110145 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:21:00.110165 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:21:00.110198 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:21:00.110219 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:21:00.110240 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:21:00.110260 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:21:00.110281 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:21:00.110302 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:21:00.110324 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:21:00.110352 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:21:00.110398 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:21:00.110419 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:21:00.110441 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:21:00.110462 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:21:00.110484 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:21:00.110514 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:21:00.110535 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:21:00.110556 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:21:00.110578 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:21:00.110598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:21:00.110620 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:21:00.110642 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:21:00.110663 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:21:00.110703 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:21:00.110719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:00.110734 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:21:00.110755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:21:00.110777 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:21:00.110799 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:21:00.110886 systemd-journald[184]: Collecting audit messages is disabled. Nov 1 00:21:00.110937 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:21:00.110967 systemd-journald[184]: Journal started Nov 1 00:21:00.111010 systemd-journald[184]: Runtime Journal (/run/log/journal/7b417e96c9b845a1aba6a2cd06f18085) is 4.9M, max 39.3M, 34.4M free. Nov 1 00:21:00.060740 systemd-modules-load[185]: Inserted module 'overlay' Nov 1 00:21:00.136832 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Nov 1 00:21:00.136974 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:21:00.138802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:00.141143 kernel: Bridge firewalling registered Nov 1 00:21:00.140448 systemd-modules-load[185]: Inserted module 'br_netfilter' Nov 1 00:21:00.148203 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:21:00.158887 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:21:00.162728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:21:00.165739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:21:00.170136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:21:00.213737 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:21:00.215176 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:21:00.217355 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:21:00.221387 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:21:00.229880 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:21:00.238216 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:21:00.257283 dracut-cmdline[219]: dracut-dracut-053 Nov 1 00:21:00.264514 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:21:00.314825 systemd-resolved[221]: Positive Trust Anchors: Nov 1 00:21:00.314861 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:21:00.314933 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:21:00.322166 systemd-resolved[221]: Defaulting to hostname 'linux'. Nov 1 00:21:00.324783 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:21:00.326919 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:21:00.423436 kernel: SCSI subsystem initialized Nov 1 00:21:00.438440 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:21:00.453435 kernel: iscsi: registered transport (tcp) Nov 1 00:21:00.485688 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:21:00.485830 kernel: QLogic iSCSI HBA Driver Nov 1 00:21:00.559079 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 1 00:21:00.566903 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:21:00.613625 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:21:00.613754 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:21:00.614428 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 00:21:00.667493 kernel: raid6: avx2x4 gen() 15016 MB/s Nov 1 00:21:00.684454 kernel: raid6: avx2x2 gen() 15719 MB/s Nov 1 00:21:00.701639 kernel: raid6: avx2x1 gen() 12101 MB/s Nov 1 00:21:00.701740 kernel: raid6: using algorithm avx2x2 gen() 15719 MB/s Nov 1 00:21:00.721474 kernel: raid6: .... xor() 18854 MB/s, rmw enabled Nov 1 00:21:00.721588 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:21:00.746428 kernel: xor: automatically using best checksumming function avx Nov 1 00:21:00.934417 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:21:00.953429 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:21:00.960721 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:21:00.987091 systemd-udevd[404]: Using default interface naming scheme 'v255'. Nov 1 00:21:00.993069 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:21:01.004292 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:21:01.025169 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Nov 1 00:21:01.075463 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:21:01.082812 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:21:01.198118 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:21:01.205680 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 00:21:01.247227 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:21:01.251921 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:21:01.254552 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:21:01.256331 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:21:01.267738 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:21:01.299006 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:21:01.307538 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Nov 1 00:21:01.321268 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 1 00:21:01.345509 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:21:01.345623 kernel: GPT:9289727 != 125829119 Nov 1 00:21:01.345645 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:21:01.345697 kernel: GPT:9289727 != 125829119 Nov 1 00:21:01.345724 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:21:01.347749 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:21:01.370952 kernel: scsi host0: Virtio SCSI HBA Nov 1 00:21:01.374466 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 1 00:21:01.378444 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 1 00:21:01.393407 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:21:01.399407 kernel: libata version 3.00 loaded. 
Nov 1 00:21:01.419678 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 1 00:21:01.425419 kernel: scsi host1: ata_piix Nov 1 00:21:01.438967 kernel: scsi host2: ata_piix Nov 1 00:21:01.439456 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Nov 1 00:21:01.439493 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Nov 1 00:21:01.444251 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:21:01.446301 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:21:01.446353 kernel: AES CTR mode by8 optimization enabled Nov 1 00:21:01.445678 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:21:01.448570 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:21:01.450459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:21:01.450840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:01.453900 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:01.464551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:01.473404 kernel: ACPI: bus type USB registered Nov 1 00:21:01.486054 kernel: usbcore: registered new interface driver usbfs Nov 1 00:21:01.486139 kernel: usbcore: registered new interface driver hub Nov 1 00:21:01.490418 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463) Nov 1 00:21:01.494405 kernel: usbcore: registered new device driver usb Nov 1 00:21:01.499436 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (451) Nov 1 00:21:01.551779 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 1 00:21:01.568169 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 1 00:21:01.635543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:21:01.638040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:01.645807 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 1 00:21:01.646653 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 1 00:21:01.660852 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 1 00:21:01.661224 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 1 00:21:01.662183 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 1 00:21:01.662351 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Nov 1 00:21:01.662588 kernel: hub 1-0:1.0: USB hub found Nov 1 00:21:01.662908 kernel: hub 1-0:1.0: 2 ports detected Nov 1 00:21:01.661815 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:21:01.672702 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:21:01.693034 disk-uuid[543]: Primary Header is updated. Nov 1 00:21:01.693034 disk-uuid[543]: Secondary Entries is updated. Nov 1 00:21:01.693034 disk-uuid[543]: Secondary Header is updated. Nov 1 00:21:01.695206 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:21:01.722835 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 1 00:21:02.717421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:21:02.717869 disk-uuid[548]: The operation has completed successfully. Nov 1 00:21:02.794247 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:21:02.794427 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:21:02.807780 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 00:21:02.812540 sh[565]: Success Nov 1 00:21:02.836140 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:21:02.913039 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:21:02.922526 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:21:02.925860 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:21:02.963056 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:21:02.963172 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:21:02.963196 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:21:02.966058 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:21:02.966188 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:21:02.978047 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 00:21:02.979564 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:21:02.994887 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:21:02.998760 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:21:03.015675 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:21:03.015772 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:21:03.015791 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:21:03.022405 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 00:21:03.036670 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:21:03.040377 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:21:03.047670 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:21:03.054975 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:21:03.182232 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:21:03.190918 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 1 00:21:03.223441 ignition[646]: Ignition 2.19.0 Nov 1 00:21:03.223456 ignition[646]: Stage: fetch-offline Nov 1 00:21:03.223508 ignition[646]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:03.223530 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:21:03.223665 ignition[646]: parsed url from cmdline: "" Nov 1 00:21:03.223670 ignition[646]: no config URL provided Nov 1 00:21:03.223675 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:21:03.223684 ignition[646]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:21:03.223690 ignition[646]: failed to fetch config: resource requires networking Nov 1 00:21:03.223937 ignition[646]: Ignition finished successfully Nov 1 00:21:03.229493 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:21:03.240905 systemd-networkd[749]: lo: Link UP Nov 1 00:21:03.240919 systemd-networkd[749]: lo: Gained carrier Nov 1 00:21:03.243774 systemd-networkd[749]: Enumeration completed Nov 1 00:21:03.244227 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 1 00:21:03.244231 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 1 00:21:03.244562 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:21:03.245270 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:21:03.245275 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:21:03.246539 systemd-networkd[749]: eth0: Link UP Nov 1 00:21:03.246546 systemd-networkd[749]: eth0: Gained carrier Nov 1 00:21:03.246560 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 1 00:21:03.248966 systemd[1]: Reached target network.target - Network. Nov 1 00:21:03.253022 systemd-networkd[749]: eth1: Link UP Nov 1 00:21:03.253029 systemd-networkd[749]: eth1: Gained carrier Nov 1 00:21:03.253050 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:21:03.256852 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 1 00:21:03.272519 systemd-networkd[749]: eth0: DHCPv4 address 165.232.144.31/20, gateway 165.232.144.1 acquired from 169.254.169.253 Nov 1 00:21:03.276512 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.30/20 acquired from 169.254.169.253 Nov 1 00:21:03.297754 ignition[756]: Ignition 2.19.0 Nov 1 00:21:03.297773 ignition[756]: Stage: fetch Nov 1 00:21:03.298090 ignition[756]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:03.298108 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:21:03.298292 ignition[756]: parsed url from cmdline: "" Nov 1 00:21:03.298300 ignition[756]: no config URL provided Nov 1 00:21:03.298310 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:21:03.298327 ignition[756]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:21:03.298360 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 1 00:21:03.314654 ignition[756]: GET result: OK Nov 1 00:21:03.314902 ignition[756]: parsing config with SHA512: 01d32d95423cc077b0a9d05c132dfe75136035bcb19bfb1bd0dd9cbe435f82d26a1311f4ef3b738d7d7b3662c98d2d1d17cd83b9cdacd8dc95037a94d9e999e2 Nov 1 00:21:03.320099 unknown[756]: fetched base config from "system" Nov 1 00:21:03.320114 unknown[756]: fetched base config from "system" Nov 1 00:21:03.320683 ignition[756]: fetch: fetch complete Nov 1 00:21:03.320122 unknown[756]: fetched user config from "digitalocean" Nov 1 00:21:03.320689 ignition[756]: fetch: fetch passed Nov 1 00:21:03.320756 ignition[756]: Ignition finished successfully Nov 1 00:21:03.323519 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 1 00:21:03.329705 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:21:03.355403 ignition[764]: Ignition 2.19.0 Nov 1 00:21:03.355415 ignition[764]: Stage: kargs Nov 1 00:21:03.355697 ignition[764]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:03.355713 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:21:03.358757 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:21:03.357006 ignition[764]: kargs: kargs passed Nov 1 00:21:03.357072 ignition[764]: Ignition finished successfully Nov 1 00:21:03.371736 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:21:03.390990 ignition[770]: Ignition 2.19.0 Nov 1 00:21:03.391003 ignition[770]: Stage: disks Nov 1 00:21:03.391266 ignition[770]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:03.393778 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:21:03.391286 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:21:03.392321 ignition[770]: disks: disks passed Nov 1 00:21:03.395177 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:21:03.392416 ignition[770]: Ignition finished successfully Nov 1 00:21:03.400725 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:21:03.401676 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:21:03.402881 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:21:03.403876 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:21:03.417766 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 1 00:21:03.438634 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 00:21:03.445167 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:21:03.452576 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 00:21:03.561643 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 00:21:03.562192 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:21:03.563579 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:21:03.576587 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:21:03.579533 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 00:21:03.584717 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Nov 1 00:21:03.591546 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 1 00:21:03.604036 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (786) Nov 1 00:21:03.604088 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:21:03.604103 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:21:03.604116 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:21:03.602851 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:21:03.607145 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 00:21:03.602907 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:21:03.604981 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:21:03.614119 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:21:03.618815 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 00:21:03.700739 coreos-metadata[788]: Nov 01 00:21:03.700 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:21:03.707544 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:21:03.715596 coreos-metadata[788]: Nov 01 00:21:03.715 INFO Fetch successful Nov 1 00:21:03.716751 coreos-metadata[789]: Nov 01 00:21:03.716 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:21:03.722924 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:21:03.725199 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Nov 1 00:21:03.725343 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Nov 1 00:21:03.730523 coreos-metadata[789]: Nov 01 00:21:03.730 INFO Fetch successful Nov 1 00:21:03.733453 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:21:03.740659 coreos-metadata[789]: Nov 01 00:21:03.740 INFO wrote hostname ci-4081.3.6-n-f16f13e513 to /sysroot/etc/hostname Nov 1 00:21:03.744302 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 00:21:03.745922 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:21:03.899971 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:21:03.907595 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Nov 1 00:21:03.922994 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:21:03.935420 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:21:03.960616 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 00:21:03.967808 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:21:03.980336 ignition[908]: INFO : Ignition 2.19.0 Nov 1 00:21:03.980336 ignition[908]: INFO : Stage: mount Nov 1 00:21:03.982135 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:03.982135 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:21:03.982135 ignition[908]: INFO : mount: mount passed Nov 1 00:21:03.982135 ignition[908]: INFO : Ignition finished successfully Nov 1 00:21:03.984955 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:21:03.996195 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:21:04.015683 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:21:04.028435 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (919) Nov 1 00:21:04.033227 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:21:04.033347 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:21:04.033368 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:21:04.039425 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 00:21:04.042466 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:21:04.076819 ignition[936]: INFO : Ignition 2.19.0 Nov 1 00:21:04.076819 ignition[936]: INFO : Stage: files Nov 1 00:21:04.079017 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:04.079017 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:21:04.079017 ignition[936]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:21:04.082246 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:21:04.082246 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:21:04.084613 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:21:04.085723 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:21:04.085723 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:21:04.085211 unknown[936]: wrote ssh authorized keys file for user: core Nov 1 00:21:04.089557 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:21:04.089557 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:21:04.089557 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:21:04.089557 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:21:04.198432 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:21:04.272477 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:21:04.273875 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:21:04.273875 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:21:04.273875 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:21:04.273875 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:21:04.273875 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:21:04.273875 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:21:04.273875 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:21:04.273875 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:21:04.283436 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:21:04.283436 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:21:04.283436 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:21:04.283436 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:21:04.283436 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:21:04.283436 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:21:04.556887 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 00:21:04.629806 systemd-networkd[749]: eth1: Gained IPv6LL Nov 1 00:21:04.693583 systemd-networkd[749]: eth0: Gained IPv6LL Nov 1 00:21:04.991325 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:21:04.992639 ignition[936]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 1 00:21:04.993633 ignition[936]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:21:04.994563 ignition[936]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:21:04.994563 ignition[936]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 1 00:21:04.994563 ignition[936]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 1 00:21:04.994563 ignition[936]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:21:04.994563 ignition[936]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:21:04.994563 ignition[936]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 1 00:21:04.994563 ignition[936]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:21:04.994563 ignition[936]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:21:04.994563 ignition[936]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:21:05.003131 ignition[936]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:21:05.003131 ignition[936]: INFO : files: files passed Nov 1 00:21:05.003131 ignition[936]: INFO : Ignition finished successfully Nov 1 00:21:04.996292 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:21:05.006582 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:21:05.010498 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:21:05.014319 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:21:05.015324 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:21:05.045409 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:21:05.045409 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:21:05.047977 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:21:05.050300 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:21:05.052305 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:21:05.064766 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:21:05.105209 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:21:05.105461 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:21:05.107250 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:21:05.108321 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:21:05.109729 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:21:05.121917 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:21:05.144862 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:21:05.152803 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:21:05.179755 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:21:05.180693 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:21:05.182121 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:21:05.183323 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
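The files stage above is driven by a declarative Ignition config: each op(N) in the log corresponds to a file, link, unit, or preset entry. A hypothetical spec-v3 config producing operations of this shape, built as a Python dict and serialized to JSON (paths match the log; unit bodies and data URLs are placeholders, not the config this droplet actually received):

```python
import json

# Hypothetical Ignition (spec v3) config mirroring the logged operations:
# a file, an extension-image symlink, a drop-in, and an enabled unit.
config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [
            {
                "path": "/etc/flatcar/update.conf",
                "mode": 420,  # octal 0644
                "contents": {"source": "data:,GROUP=stable%0A"},  # placeholder
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",  # placeholder
            },
            {
                "name": "containerd.service",
                "dropins": [
                    {"name": "10-use-cgroupfs.conf", "contents": "[Service]\n"}  # placeholder
                ],
            },
        ]
    },
}

print(json.dumps(config, indent=2))
```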
Nov 1 00:21:05.183562 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:21:05.185044 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:21:05.186557 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:21:05.187665 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:21:05.188771 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:21:05.190003 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:21:05.191221 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:21:05.192443 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:21:05.193892 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:21:05.195173 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:21:05.196498 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:21:05.197574 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:21:05.197864 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:21:05.199170 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:21:05.200681 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:21:05.201852 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:21:05.202149 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:21:05.203227 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:21:05.203514 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:21:05.204872 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:21:05.205074 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:21:05.206467 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:21:05.206640 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:21:05.207694 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 00:21:05.207858 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 00:21:05.215956 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:21:05.218817 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:21:05.221368 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:21:05.221730 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:21:05.223176 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:21:05.223350 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:21:05.233667 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:21:05.233838 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:21:05.258716 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:21:05.276064 systemd[1]: ignition-mount.service: Deactivated successfully. 
Nov 1 00:21:05.295846 ignition[988]: INFO : Ignition 2.19.0 Nov 1 00:21:05.295846 ignition[988]: INFO : Stage: umount Nov 1 00:21:05.295846 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:05.295846 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:21:05.295846 ignition[988]: INFO : umount: umount passed Nov 1 00:21:05.295846 ignition[988]: INFO : Ignition finished successfully Nov 1 00:21:05.277441 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:21:05.296237 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:21:05.298567 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:21:05.300019 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:21:05.300133 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:21:05.302340 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:21:05.302485 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 1 00:21:05.304085 systemd[1]: Stopped target network.target - Network. Nov 1 00:21:05.305101 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:21:05.305209 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:21:05.306337 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:21:05.307349 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:21:05.307623 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:21:05.308568 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:21:05.309676 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:21:05.310913 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:21:05.310996 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:21:05.312042 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:21:05.312113 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:21:05.313083 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:21:05.313178 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:21:05.314314 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:21:05.314413 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:21:05.315791 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:21:05.316689 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:21:05.318562 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:21:05.318750 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:21:05.320315 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:21:05.320832 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:21:05.321464 systemd-networkd[749]: eth1: DHCPv6 lease lost Nov 1 00:21:05.327790 systemd-networkd[749]: eth0: DHCPv6 lease lost Nov 1 00:21:05.328435 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:21:05.329731 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:21:05.335066 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:21:05.335311 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Nov 1 00:21:05.338874 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:21:05.338929 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:21:05.354668 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:21:05.356156 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:21:05.356283 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:21:05.357642 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:21:05.357736 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:21:05.358555 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:21:05.358642 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:21:05.360016 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:21:05.360094 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:21:05.361344 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:21:05.375132 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:21:05.375437 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:21:05.377831 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:21:05.377942 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:21:05.378792 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:21:05.378858 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:21:05.383609 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:21:05.383703 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:21:05.385568 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:21:05.385680 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:21:05.386948 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:21:05.387047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:21:05.393029 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:21:05.393949 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:21:05.394060 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:21:05.395853 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 00:21:05.395934 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:21:05.396902 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:21:05.396965 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:21:05.397451 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:21:05.397491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:05.398352 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:21:05.398495 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:21:05.425648 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Nov 1 00:21:05.425859 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:21:05.428776 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:21:05.436813 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:21:05.452022 systemd[1]: Switching root. Nov 1 00:21:05.504811 systemd-journald[184]: Journal stopped Nov 1 00:21:06.995747 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Nov 1 00:21:06.995844 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:21:06.995863 kernel: SELinux: policy capability open_perms=1 Nov 1 00:21:06.995875 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:21:06.995888 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:21:06.995900 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:21:06.995937 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:21:06.995951 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:21:06.995965 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:21:06.995977 kernel: audit: type=1403 audit(1761956465.723:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:21:06.995997 systemd[1]: Successfully loaded SELinux policy in 51.359ms. Nov 1 00:21:06.996021 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.009ms. Nov 1 00:21:06.996036 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:21:06.996051 systemd[1]: Detected virtualization kvm. Nov 1 00:21:06.996071 systemd[1]: Detected architecture x86-64. Nov 1 00:21:06.996084 systemd[1]: Detected first boot. Nov 1 00:21:06.996098 systemd[1]: Hostname set to <ci-4081.3.6-n-f16f13e513>. Nov 1 00:21:06.996112 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:21:06.996125 zram_generator::config[1051]: No configuration found. Nov 1 00:21:06.996141 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:21:06.996155 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:21:06.996168 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 1 00:21:06.996189 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:21:06.996203 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:21:06.996216 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:21:06.996231 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:21:06.996245 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:21:06.996260 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:21:06.996273 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:21:06.996287 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:21:06.996301 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:21:06.996320 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:21:06.996334 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:21:06.996354 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:21:06.996383 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:21:06.996398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:21:06.996412 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:21:06.996432 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:21:06.996451 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:21:06.996472 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:21:06.996506 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:21:06.996521 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:21:06.996534 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:21:06.996548 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:21:06.996562 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:21:06.996577 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:21:06.996600 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:21:06.996614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:21:06.996628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:21:06.996642 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:21:06.996656 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:21:06.996672 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:21:06.996685 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:21:06.996700 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:21:06.996714 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:06.996739 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:21:06.996760 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:21:06.996774 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:21:06.996788 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:21:06.996802 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:21:06.996832 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:21:06.996845 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:21:06.996859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:21:06.996873 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:21:06.996891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:21:06.996905 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Nov 1 00:21:06.996918 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:21:06.996934 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:21:06.996948 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 1 00:21:06.996962 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 1 00:21:06.996976 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:21:06.996989 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:21:06.997010 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:21:06.997024 kernel: fuse: init (API version 7.39) Nov 1 00:21:06.997036 kernel: loop: module loaded Nov 1 00:21:06.997050 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:21:06.997064 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:21:06.997078 kernel: ACPI: bus type drm_connector registered Nov 1 00:21:06.997093 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:06.997106 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:21:06.997120 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:21:06.997140 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:21:06.997154 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:21:06.997173 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:21:06.997196 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:21:06.997273 systemd-journald[1137]: Collecting audit messages is disabled. Nov 1 00:21:06.997307 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:21:06.997321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:21:06.997344 systemd-journald[1137]: Journal started Nov 1 00:21:06.999914 systemd-journald[1137]: Runtime Journal (/run/log/journal/7b417e96c9b845a1aba6a2cd06f18085) is 4.9M, max 39.3M, 34.4M free. Nov 1 00:21:07.000019 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:21:07.003458 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:21:07.003550 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:21:07.006040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:21:07.006304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:21:07.007385 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:21:07.007605 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:21:07.008626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:21:07.008826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:21:07.010218 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:21:07.010470 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Nov 1 00:21:07.011812 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:21:07.013992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:21:07.019174 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:21:07.022224 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:21:07.024671 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:21:07.049016 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:21:07.058622 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:21:07.069763 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:21:07.072580 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:21:07.081914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:21:07.101653 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:21:07.102289 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:21:07.109770 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:21:07.110509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:21:07.113625 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:21:07.126646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:21:07.131304 systemd-journald[1137]: Time spent on flushing to /var/log/journal/7b417e96c9b845a1aba6a2cd06f18085 is 48.529ms for 971 entries. Nov 1 00:21:07.131304 systemd-journald[1137]: System Journal (/var/log/journal/7b417e96c9b845a1aba6a2cd06f18085) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:21:07.194648 systemd-journald[1137]: Received client request to flush runtime journal. Nov 1 00:21:07.134966 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:21:07.139911 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:21:07.143496 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:21:07.152723 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 00:21:07.161927 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:21:07.162825 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:21:07.205106 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:21:07.236727 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 00:21:07.250288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:21:07.256799 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Nov 1 00:21:07.256820 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Nov 1 00:21:07.263664 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
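journald's flush accounting above (48.529ms spent on 971 entries) works out to roughly 50 µs per entry; a quick check of that arithmetic:

```python
# Arithmetic check of the journald flush statistics quoted above.
flush_ms, entries = 48.529, 971
print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~50.0 us
```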
Nov 1 00:21:07.272876 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:21:07.315763 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:21:07.326737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:21:07.355677 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Nov 1 00:21:07.355698 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Nov 1 00:21:07.363887 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:21:07.895406 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:21:07.902815 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:21:07.941743 systemd-udevd[1219]: Using default interface naming scheme 'v255'. Nov 1 00:21:07.964900 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:21:07.980506 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:21:08.011612 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:21:08.101755 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:21:08.132653 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 1 00:21:08.135805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:08.136076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:21:08.145069 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:21:08.155650 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:21:08.163123 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:21:08.166185 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:21:08.166245 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:21:08.166296 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:08.182399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:21:08.182664 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:21:08.193970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:21:08.194219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:21:08.196854 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:21:08.207796 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:21:08.208041 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:21:08.208944 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 1 00:21:08.246424 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:21:08.262408 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1230) Nov 1 00:21:08.262525 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:21:08.290416 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 1 00:21:08.301061 systemd-networkd[1227]: lo: Link UP Nov 1 00:21:08.301714 systemd-networkd[1227]: lo: Gained carrier Nov 1 00:21:08.308024 systemd-networkd[1227]: Enumeration completed Nov 1 00:21:08.308525 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:21:08.310365 systemd-networkd[1227]: eth0: Configuring with /run/systemd/network/10-da:85:49:03:8d:19.network. Nov 1 00:21:08.311620 systemd-networkd[1227]: eth1: Configuring with /run/systemd/network/10-9e:bf:4e:33:95:eb.network. Nov 1 00:21:08.314851 systemd-networkd[1227]: eth0: Link UP Nov 1 00:21:08.314865 systemd-networkd[1227]: eth0: Gained carrier Nov 1 00:21:08.316904 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:21:08.320940 systemd-networkd[1227]: eth1: Link UP Nov 1 00:21:08.322449 systemd-networkd[1227]: eth1: Gained carrier Nov 1 00:21:08.349398 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:21:08.417423 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:21:08.424199 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:21:08.436824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:08.439754 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 1 00:21:08.439849 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 1 00:21:08.447412 kernel: Console: switching to colour dummy device 80x25 Nov 1 00:21:08.449971 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 1 00:21:08.450047 kernel: [drm] features: -context_init Nov 1 00:21:08.454396 kernel: [drm] number of scanouts: 1 Nov 1 00:21:08.454519 kernel: [drm] number of cap sets: 0 Nov 1 00:21:08.457646 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Nov 1 00:21:08.467485 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 1 00:21:08.469344 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 00:21:08.473822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:21:08.474261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:08.479427 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 1 00:21:08.493818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:08.512461 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:21:08.512759 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:08.524775 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:08.641467 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:21:08.670071 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:21:08.685847 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 00:21:08.688033 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
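systemd-networkd above matches each NIC against a generated unit named after its MAC address (/run/systemd/network/10-da:85:49:03:8d:19.network for eth0). The keys inside those files are not shown in the log; a plausible minimal generator, assuming a match-by-MAC, DHCP-enabled unit:

```python
from pathlib import Path

def write_network_unit(mac: str, rundir: str = "/run/systemd/network") -> Path:
    """Emit a match-by-MAC .network unit like the 10-<mac>.network files
    referenced above. The key set is an assumed minimal equivalent; the
    generator's real output is not visible in the log."""
    unit = Path(rundir) / f"10-{mac}.network"
    unit.write_text(f"[Match]\nMACAddress={mac}\n\n[Network]\nDHCP=ipv4\n")
    return unit

# e.g. write_network_unit("da:85:49:03:8d:19") for eth0
```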
Nov 1 00:21:08.704483 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:21:08.738621 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:21:08.741471 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:21:08.748770 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:21:08.762402 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:21:08.794049 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:21:08.795487 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:21:08.808763 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 1 00:21:08.808983 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:21:08.809037 systemd[1]: Reached target machines.target - Containers. Nov 1 00:21:08.811692 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:21:08.829421 kernel: ISO 9660 Extensions: RRIP_1991A Nov 1 00:21:08.836784 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 1 00:21:08.839956 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:21:08.842166 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 00:21:08.851833 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:21:08.854363 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:21:08.860449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:21:08.867668 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 00:21:08.875753 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:21:08.889646 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:21:08.897322 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:21:08.907103 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:21:08.917745 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 00:21:08.918557 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 00:21:08.950365 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:21:08.973418 kernel: loop1: detected capacity change from 0 to 142488 Nov 1 00:21:09.032444 kernel: loop2: detected capacity change from 0 to 8 Nov 1 00:21:09.056306 kernel: loop3: detected capacity change from 0 to 140768 Nov 1 00:21:09.099737 kernel: loop4: detected capacity change from 0 to 224512 Nov 1 00:21:09.127738 kernel: loop5: detected capacity change from 0 to 142488 Nov 1 00:21:09.152446 kernel: loop6: detected capacity change from 0 to 8 Nov 1 00:21:09.157757 kernel: loop7: detected capacity change from 0 to 140768 Nov 1 00:21:09.178654 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Nov 1 00:21:09.179598 (sd-merge)[1311]: Merged extensions into '/usr'. 
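The loop0..loop7 capacity changes and the (sd-merge) lines above show systemd-sysext attaching the extension images and overlaying them onto /usr. Discovery is by *.raw entries under /etc/extensions, such as the kubernetes.raw symlink written during the files stage; a sketch of that enumeration (illustrative only, since systemd-sysext also scans /run and /var/lib locations and honors version suffixes):

```python
from pathlib import Path

def list_extension_images(etc: str = "/etc/extensions") -> list[str]:
    """Enumerate sysext images as *.raw entries (files or symlinks, like
    the kubernetes.raw link written in the files stage) under /etc/extensions."""
    return sorted(p.name for p in Path(etc).glob("*.raw"))
```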
Nov 1 00:21:09.203941 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:21:09.203968 systemd[1]: Reloading... Nov 1 00:21:09.344851 zram_generator::config[1339]: No configuration found. Nov 1 00:21:09.505483 ldconfig[1297]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:21:09.538632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:21:09.557578 systemd-networkd[1227]: eth0: Gained IPv6LL Nov 1 00:21:09.621548 systemd-networkd[1227]: eth1: Gained IPv6LL Nov 1 00:21:09.621872 systemd[1]: Reloading finished in 417 ms. Nov 1 00:21:09.638290 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:21:09.642015 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:21:09.645604 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:21:09.665766 systemd[1]: Starting ensure-sysext.service... Nov 1 00:21:09.672701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:21:09.682055 systemd[1]: Reloading requested from client PID 1391 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:21:09.682080 systemd[1]: Reloading... Nov 1 00:21:09.727668 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:21:09.728320 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:21:09.730190 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:21:09.730699 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Nov 1 00:21:09.730807 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Nov 1 00:21:09.734712 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:21:09.736471 systemd-tmpfiles[1392]: Skipping /boot Nov 1 00:21:09.750493 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:21:09.750710 systemd-tmpfiles[1392]: Skipping /boot Nov 1 00:21:09.809863 zram_generator::config[1419]: No configuration found. Nov 1 00:21:09.958212 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:21:10.035737 systemd[1]: Reloading finished in 351 ms. Nov 1 00:21:10.066590 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:21:10.085977 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:21:10.095698 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:21:10.109665 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:21:10.124276 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:21:10.140654 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 1 00:21:10.164231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:10.164758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:21:10.169123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:21:10.181742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:21:10.195847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:21:10.198097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:21:10.198336 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:10.214047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:10.214737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:21:10.215655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:21:10.215908 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:10.224363 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:21:10.228134 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:21:10.239728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:21:10.240018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:21:10.257099 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:10.257901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:21:10.267864 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:21:10.273000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:21:10.283610 augenrules[1500]: No rules Nov 1 00:21:10.295803 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:21:10.299764 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:10.305972 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:21:10.313560 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:21:10.319469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:21:10.319750 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:21:10.324834 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:21:10.325119 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:21:10.331900 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 1 00:21:10.332191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:21:10.337286 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:21:10.348471 systemd[1]: Finished ensure-sysext.service. Nov 1 00:21:10.360962 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:21:10.361389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:21:10.368912 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 00:21:10.379815 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:21:10.400022 systemd-resolved[1480]: Positive Trust Anchors: Nov 1 00:21:10.400711 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:21:10.400775 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:21:10.408596 systemd-resolved[1480]: Using system hostname 'ci-4081.3.6-n-f16f13e513'. Nov 1 00:21:10.412527 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:21:10.413555 systemd[1]: Reached target network.target - Network. Nov 1 00:21:10.414305 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:21:10.416158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:21:10.479571 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 00:21:10.480465 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:21:10.481103 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:21:10.483550 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:21:10.484774 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:21:10.485748 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:21:10.485951 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:21:10.486885 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:21:10.487983 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:21:10.489041 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:21:10.489904 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:21:10.491977 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:21:10.497409 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Nov 1 00:21:10.501861 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:21:10.507284 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:21:10.510174 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:21:10.512971 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:21:10.514082 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:21:10.514157 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:21:10.514189 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:21:10.527604 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:21:10.538664 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 1 00:21:10.546697 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:21:10.556816 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:21:10.569757 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:21:10.570538 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:21:10.581028 jq[1532]: false Nov 1 00:21:10.584559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:10.601870 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:21:10.607330 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:21:10.619623 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:21:10.628856 dbus-daemon[1529]: [system] SELinux support is enabled Nov 1 00:21:10.636852 coreos-metadata[1528]: Nov 01 00:21:10.630 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:21:10.633785 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:21:10.649133 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:21:10.672417 coreos-metadata[1528]: Nov 01 00:21:10.650 INFO Fetch successful Nov 1 00:21:10.672585 extend-filesystems[1533]: Found loop4 Nov 1 00:21:10.672585 extend-filesystems[1533]: Found loop5 Nov 1 00:21:10.672585 extend-filesystems[1533]: Found loop6 Nov 1 00:21:10.672585 extend-filesystems[1533]: Found loop7 Nov 1 00:21:10.672585 extend-filesystems[1533]: Found vda Nov 1 00:21:10.672585 extend-filesystems[1533]: Found vda1 Nov 1 00:21:10.672585 extend-filesystems[1533]: Found vda2 Nov 1 00:21:10.672585 extend-filesystems[1533]: Found vda3 Nov 1 00:21:10.672585 extend-filesystems[1533]: Found usr Nov 1 00:21:10.672585 extend-filesystems[1533]: Found vda4 Nov 1 00:21:10.672585 extend-filesystems[1533]: Found vda6 Nov 1 00:21:10.672585 extend-filesystems[1533]: Found vda7 Nov 1 00:21:10.672585 extend-filesystems[1533]: Found vda9 Nov 1 00:21:10.672585 extend-filesystems[1533]: Checking size of /dev/vda9 Nov 1 00:21:10.690693 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:21:10.730620 extend-filesystems[1533]: Resized partition /dev/vda9 Nov 1 00:21:10.694968 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:21:10.713732 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 1 00:21:10.734601 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:21:10.745090 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:21:10.761453 extend-filesystems[1561]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:21:10.776917 update_engine[1560]: I20251101 00:21:10.776818 1560 main.cc:92] Flatcar Update Engine starting Nov 1 00:21:10.781348 jq[1562]: true Nov 1 00:21:10.783632 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 1 00:21:10.790240 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:21:10.790658 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:21:10.799296 update_engine[1560]: I20251101 00:21:10.798475 1560 update_check_scheduler.cc:74] Next update check in 8m22s Nov 1 00:21:10.810714 systemd-timesyncd[1522]: Contacted time server 198.137.202.56:123 (0.flatcar.pool.ntp.org). Nov 1 00:21:10.810813 systemd-timesyncd[1522]: Initial clock synchronization to Sat 2025-11-01 00:21:10.773551 UTC. Nov 1 00:21:10.815283 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:21:10.815678 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:21:10.822097 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1232) Nov 1 00:21:10.824071 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:21:10.860100 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:21:10.860647 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:21:10.917849 (ntainerd)[1578]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:21:10.936513 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 00:21:10.959440 tar[1575]: linux-amd64/LICENSE Nov 1 00:21:10.959440 tar[1575]: linux-amd64/helm Nov 1 00:21:10.975683 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:21:10.976677 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:21:10.976803 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:21:10.976831 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:21:10.977314 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:21:10.979616 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 1 00:21:10.979669 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:21:10.980790 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Nov 1 00:21:11.006471 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 1 00:21:11.006576 jq[1577]: true Nov 1 00:21:10.991054 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:21:11.043978 extend-filesystems[1561]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:21:11.043978 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 1 00:21:11.043978 extend-filesystems[1561]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 1 00:21:11.059611 extend-filesystems[1533]: Resized filesystem in /dev/vda9 Nov 1 00:21:11.059611 extend-filesystems[1533]: Found vdb Nov 1 00:21:11.069040 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:21:11.070195 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:21:11.080172 systemd-logind[1552]: New seat seat0. Nov 1 00:21:11.095922 systemd-logind[1552]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:21:11.095958 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:21:11.096580 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:21:11.153501 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:21:11.237325 bash[1636]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:21:11.245164 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:21:11.264784 systemd[1]: Starting sshkeys.service... Nov 1 00:21:11.343955 locksmithd[1595]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:21:11.347889 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:21:11.379747 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:21:11.396097 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 00:21:11.406117 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 1 00:21:11.462924 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:21:11.463309 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:21:11.478300 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:21:11.508889 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:21:11.528571 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:21:11.543046 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:21:11.544249 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:21:11.550404 coreos-metadata[1650]: Nov 01 00:21:11.549 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:21:11.569959 coreos-metadata[1650]: Nov 01 00:21:11.567 INFO Fetch successful Nov 1 00:21:11.586124 unknown[1650]: wrote ssh authorized keys file for user: core Nov 1 00:21:11.622062 update-ssh-keys[1664]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:21:11.626171 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 00:21:11.633937 systemd[1]: Finished sshkeys.service. 
Nov 1 00:21:11.650759 containerd[1578]: time="2025-11-01T00:21:11.649856145Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:21:11.713878 containerd[1578]: time="2025-11-01T00:21:11.713512803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:11.719671 containerd[1578]: time="2025-11-01T00:21:11.719586856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:21:11.719671 containerd[1578]: time="2025-11-01T00:21:11.719649750Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:21:11.719671 containerd[1578]: time="2025-11-01T00:21:11.719681326Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:21:11.719960 containerd[1578]: time="2025-11-01T00:21:11.719933125Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:21:11.719999 containerd[1578]: time="2025-11-01T00:21:11.719969940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:11.720087 containerd[1578]: time="2025-11-01T00:21:11.720058998Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:21:11.720120 containerd[1578]: time="2025-11-01T00:21:11.720082989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:11.722455 containerd[1578]: time="2025-11-01T00:21:11.722109270Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:21:11.722455 containerd[1578]: time="2025-11-01T00:21:11.722158137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:11.722455 containerd[1578]: time="2025-11-01T00:21:11.722183914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:21:11.722455 containerd[1578]: time="2025-11-01T00:21:11.722201162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:11.723603 containerd[1578]: time="2025-11-01T00:21:11.723509209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:11.725108 containerd[1578]: time="2025-11-01T00:21:11.723945885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:11.725108 containerd[1578]: time="2025-11-01T00:21:11.724293910Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:21:11.725108 containerd[1578]: time="2025-11-01T00:21:11.724323001Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:21:11.726486 containerd[1578]: time="2025-11-01T00:21:11.726418480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:21:11.726575 containerd[1578]: time="2025-11-01T00:21:11.726552875Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:21:11.736998 containerd[1578]: time="2025-11-01T00:21:11.735933981Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:21:11.736998 containerd[1578]: time="2025-11-01T00:21:11.736027613Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:21:11.736998 containerd[1578]: time="2025-11-01T00:21:11.736053812Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:21:11.736998 containerd[1578]: time="2025-11-01T00:21:11.736077648Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:21:11.736998 containerd[1578]: time="2025-11-01T00:21:11.736099101Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:21:11.736998 containerd[1578]: time="2025-11-01T00:21:11.736335874Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.741724107Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.741973493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742000160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742020568Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742040219Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742077663Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742099135Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742117733Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742134525Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742151109Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742166514Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742182316Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742208220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743397 containerd[1578]: time="2025-11-01T00:21:11.742226307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742286652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742306566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742321389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742340015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742355439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742386949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742415008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742452225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742471258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742485812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742508169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742542839Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742573172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742588262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Nov 1 00:21:11.743789 containerd[1578]: time="2025-11-01T00:21:11.742602191Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:21:11.744133 containerd[1578]: time="2025-11-01T00:21:11.742657254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:21:11.744133 containerd[1578]: time="2025-11-01T00:21:11.742686616Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:21:11.744133 containerd[1578]: time="2025-11-01T00:21:11.742700463Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:21:11.744133 containerd[1578]: time="2025-11-01T00:21:11.742715843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:21:11.744133 containerd[1578]: time="2025-11-01T00:21:11.742730224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.744133 containerd[1578]: time="2025-11-01T00:21:11.742780335Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:21:11.744133 containerd[1578]: time="2025-11-01T00:21:11.742796984Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:21:11.744133 containerd[1578]: time="2025-11-01T00:21:11.742810896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:21:11.744301 containerd[1578]: time="2025-11-01T00:21:11.743118547Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:21:11.744301 containerd[1578]: time="2025-11-01T00:21:11.743189188Z" level=info msg="Connect containerd service" Nov 1 00:21:11.744301 containerd[1578]: time="2025-11-01T00:21:11.743251030Z" level=info msg="using legacy CRI server" Nov 1 00:21:11.744301 containerd[1578]: time="2025-11-01T00:21:11.743265013Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:21:11.744799 containerd[1578]: time="2025-11-01T00:21:11.744766437Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:21:11.746156 containerd[1578]: time="2025-11-01T00:21:11.746090338Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:21:11.746832 containerd[1578]: time="2025-11-01T00:21:11.746802719Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:21:11.746995 containerd[1578]: time="2025-11-01T00:21:11.746981928Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:21:11.747288 containerd[1578]: time="2025-11-01T00:21:11.747257100Z" level=info msg="Start subscribing containerd event" Nov 1 00:21:11.747455 containerd[1578]: time="2025-11-01T00:21:11.747437439Z" level=info msg="Start recovering state" Nov 1 00:21:11.747575 containerd[1578]: time="2025-11-01T00:21:11.747563962Z" level=info msg="Start event monitor" Nov 1 00:21:11.747627 containerd[1578]: time="2025-11-01T00:21:11.747618347Z" level=info msg="Start snapshots syncer" Nov 1 00:21:11.747669 containerd[1578]: time="2025-11-01T00:21:11.747661305Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:21:11.747711 containerd[1578]: time="2025-11-01T00:21:11.747703317Z" level=info msg="Start streaming server" Nov 1 00:21:11.747865 containerd[1578]: time="2025-11-01T00:21:11.747848066Z" level=info msg="containerd successfully booted in 0.099763s" Nov 1 00:21:11.748060 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:21:12.086459 tar[1575]: linux-amd64/README.md Nov 1 00:21:12.115330 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:21:12.539641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:12.543851 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:21:12.545600 systemd[1]: Startup finished in 7.187s (kernel) + 6.872s (userspace) = 14.060s. 
Nov 1 00:21:12.553134 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:21:13.191496 kubelet[1688]: E1101 00:21:13.191405 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:13.195642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:13.195975 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:14.294689 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:21:14.302830 systemd[1]: Started sshd@0-165.232.144.31:22-139.178.68.195:52134.service - OpenSSH per-connection server daemon (139.178.68.195:52134). Nov 1 00:21:14.373568 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 52134 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:21:14.376137 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:14.389468 systemd-logind[1552]: New session 1 of user core. Nov 1 00:21:14.391544 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:21:14.400771 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:21:14.418884 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:21:14.435939 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:21:14.440150 (systemd)[1706]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:14.559679 systemd[1706]: Queued start job for default target default.target. Nov 1 00:21:14.560491 systemd[1706]: Created slice app.slice - User Application Slice. Nov 1 00:21:14.560523 systemd[1706]: Reached target paths.target - Paths. Nov 1 00:21:14.560537 systemd[1706]: Reached target timers.target - Timers. Nov 1 00:21:14.566560 systemd[1706]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:21:14.584595 systemd[1706]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:21:14.584697 systemd[1706]: Reached target sockets.target - Sockets. Nov 1 00:21:14.584719 systemd[1706]: Reached target basic.target - Basic System. Nov 1 00:21:14.584801 systemd[1706]: Reached target default.target - Main User Target. Nov 1 00:21:14.584853 systemd[1706]: Startup finished in 136ms. Nov 1 00:21:14.585938 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:21:14.590755 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:21:14.659983 systemd[1]: Started sshd@1-165.232.144.31:22-139.178.68.195:52150.service - OpenSSH per-connection server daemon (139.178.68.195:52150). Nov 1 00:21:14.711907 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 52150 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:21:14.714191 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:14.721607 systemd-logind[1552]: New session 2 of user core. Nov 1 00:21:14.730959 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 1 00:21:14.799275 sshd[1718]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:14.805825 systemd[1]: sshd@1-165.232.144.31:22-139.178.68.195:52150.service: Deactivated successfully. Nov 1 00:21:14.809183 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:21:14.811336 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:21:14.815794 systemd[1]: Started sshd@2-165.232.144.31:22-139.178.68.195:52160.service - OpenSSH per-connection server daemon (139.178.68.195:52160). Nov 1 00:21:14.816940 systemd-logind[1552]: Removed session 2. Nov 1 00:21:14.860246 sshd[1726]: Accepted publickey for core from 139.178.68.195 port 52160 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:21:14.861910 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:14.868607 systemd-logind[1552]: New session 3 of user core. Nov 1 00:21:14.873897 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:21:14.934706 sshd[1726]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:14.946112 systemd[1]: Started sshd@3-165.232.144.31:22-139.178.68.195:52166.service - OpenSSH per-connection server daemon (139.178.68.195:52166). Nov 1 00:21:14.947229 systemd[1]: sshd@2-165.232.144.31:22-139.178.68.195:52160.service: Deactivated successfully. Nov 1 00:21:14.949541 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:21:14.951124 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:21:14.954198 systemd-logind[1552]: Removed session 3. Nov 1 00:21:14.988571 sshd[1732]: Accepted publickey for core from 139.178.68.195 port 52166 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:21:14.991056 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:14.999526 systemd-logind[1552]: New session 4 of user core. Nov 1 00:21:15.016086 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:21:15.084739 sshd[1732]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:15.092810 systemd[1]: Started sshd@4-165.232.144.31:22-139.178.68.195:52176.service - OpenSSH per-connection server daemon (139.178.68.195:52176). Nov 1 00:21:15.093548 systemd[1]: sshd@3-165.232.144.31:22-139.178.68.195:52166.service: Deactivated successfully. Nov 1 00:21:15.098700 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:21:15.100307 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:21:15.103190 systemd-logind[1552]: Removed session 4. Nov 1 00:21:15.157607 sshd[1739]: Accepted publickey for core from 139.178.68.195 port 52176 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:21:15.159705 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:15.166800 systemd-logind[1552]: New session 5 of user core. Nov 1 00:21:15.172951 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 1 00:21:15.251336 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:21:15.251918 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:21:15.267518 sudo[1746]: pam_unix(sudo:session): session closed for user root Nov 1 00:21:15.271660 sshd[1739]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:15.277349 systemd[1]: sshd@4-165.232.144.31:22-139.178.68.195:52176.service: Deactivated successfully. Nov 1 00:21:15.280733 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:21:15.289831 systemd[1]: Started sshd@5-165.232.144.31:22-139.178.68.195:52186.service - OpenSSH per-connection server daemon (139.178.68.195:52186). Nov 1 00:21:15.290331 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:21:15.291723 systemd-logind[1552]: Removed session 5. Nov 1 00:21:15.336214 sshd[1751]: Accepted publickey for core from 139.178.68.195 port 52186 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:21:15.338113 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:15.343197 systemd-logind[1552]: New session 6 of user core. Nov 1 00:21:15.354019 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 00:21:15.418179 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:21:15.418633 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:21:15.422991 sudo[1756]: pam_unix(sudo:session): session closed for user root Nov 1 00:21:15.430195 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:21:15.430534 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:21:15.447859 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:21:15.450983 auditctl[1759]: No rules Nov 1 00:21:15.451426 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:21:15.451690 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:21:15.455839 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:21:15.495322 augenrules[1778]: No rules Nov 1 00:21:15.496820 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:21:15.499175 sudo[1755]: pam_unix(sudo:session): session closed for user root Nov 1 00:21:15.504951 sshd[1751]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:15.510528 systemd[1]: sshd@5-165.232.144.31:22-139.178.68.195:52186.service: Deactivated successfully. Nov 1 00:21:15.514149 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:21:15.516589 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:21:15.520725 systemd[1]: Started sshd@6-165.232.144.31:22-139.178.68.195:52200.service - OpenSSH per-connection server daemon (139.178.68.195:52200). Nov 1 00:21:15.522002 systemd-logind[1552]: Removed session 6. Nov 1 00:21:15.570874 sshd[1787]: Accepted publickey for core from 139.178.68.195 port 52200 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:21:15.573133 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:15.580434 systemd-logind[1552]: New session 7 of user core. 
Nov 1 00:21:15.589116 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:21:15.653335 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:21:15.653771 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:21:16.078957 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:21:16.079048 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:21:16.551005 dockerd[1806]: time="2025-11-01T00:21:16.550815334Z" level=info msg="Starting up" Nov 1 00:21:16.785723 dockerd[1806]: time="2025-11-01T00:21:16.785653406Z" level=info msg="Loading containers: start." Nov 1 00:21:16.920512 kernel: Initializing XFRM netlink socket Nov 1 00:21:17.029138 systemd-networkd[1227]: docker0: Link UP Nov 1 00:21:17.048538 dockerd[1806]: time="2025-11-01T00:21:17.048473860Z" level=info msg="Loading containers: done." Nov 1 00:21:17.069688 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1016437775-merged.mount: Deactivated successfully. Nov 1 00:21:17.070108 dockerd[1806]: time="2025-11-01T00:21:17.070006365Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:21:17.072148 dockerd[1806]: time="2025-11-01T00:21:17.071654236Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:21:17.072148 dockerd[1806]: time="2025-11-01T00:21:17.071846907Z" level=info msg="Daemon has completed initialization" Nov 1 00:21:17.110428 dockerd[1806]: time="2025-11-01T00:21:17.110284737Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:21:17.110943 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:21:18.067971 containerd[1578]: time="2025-11-01T00:21:18.067688883Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:21:18.714755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143650734.mount: Deactivated successfully. 
Nov 1 00:21:19.873631 containerd[1578]: time="2025-11-01T00:21:19.873568126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:19.874925 containerd[1578]: time="2025-11-01T00:21:19.874793594Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 1 00:21:19.877399 containerd[1578]: time="2025-11-01T00:21:19.875397014Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:19.878474 containerd[1578]: time="2025-11-01T00:21:19.878434179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:19.879749 containerd[1578]: time="2025-11-01T00:21:19.879702132Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.811962783s" Nov 1 00:21:19.879925 containerd[1578]: time="2025-11-01T00:21:19.879901255Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:21:19.880602 containerd[1578]: time="2025-11-01T00:21:19.880563099Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:21:21.373588 containerd[1578]: time="2025-11-01T00:21:21.373519424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:21.374657 containerd[1578]: time="2025-11-01T00:21:21.374606015Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 1 00:21:21.375601 containerd[1578]: time="2025-11-01T00:21:21.375569724Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:21.378623 containerd[1578]: time="2025-11-01T00:21:21.378582222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:21.380079 containerd[1578]: time="2025-11-01T00:21:21.380043889Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.499335682s" Nov 1 00:21:21.380192 containerd[1578]: time="2025-11-01T00:21:21.380176976Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:21:21.380898 containerd[1578]: 
time="2025-11-01T00:21:21.380877033Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:21:22.645924 containerd[1578]: time="2025-11-01T00:21:22.645835342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:22.647901 containerd[1578]: time="2025-11-01T00:21:22.647714760Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 1 00:21:22.648698 containerd[1578]: time="2025-11-01T00:21:22.648617278Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:22.655937 containerd[1578]: time="2025-11-01T00:21:22.655666259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:22.657427 containerd[1578]: time="2025-11-01T00:21:22.657070252Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.275961614s" Nov 1 00:21:22.657427 containerd[1578]: time="2025-11-01T00:21:22.657144764Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:21:22.659263 containerd[1578]: time="2025-11-01T00:21:22.659091907Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:21:23.340659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:21:23.350225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:23.630212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:23.646880 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:21:23.776456 kubelet[2034]: E1101 00:21:23.776317 2034 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:23.785737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:23.786094 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:24.067262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011055249.mount: Deactivated successfully. 
Nov 1 00:21:24.763252 containerd[1578]: time="2025-11-01T00:21:24.763173415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:24.766209 containerd[1578]: time="2025-11-01T00:21:24.766083451Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 1 00:21:24.767481 containerd[1578]: time="2025-11-01T00:21:24.767246126Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:24.769947 containerd[1578]: time="2025-11-01T00:21:24.769811204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:24.771700 containerd[1578]: time="2025-11-01T00:21:24.771026653Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.11178363s" Nov 1 00:21:24.771700 containerd[1578]: time="2025-11-01T00:21:24.771087147Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:21:24.772007 containerd[1578]: time="2025-11-01T00:21:24.771747420Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:21:24.773809 systemd-resolved[1480]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 1 00:21:25.306043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154176852.mount: Deactivated successfully. 
Nov 1 00:21:26.433591 containerd[1578]: time="2025-11-01T00:21:26.432935737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:26.435487 containerd[1578]: time="2025-11-01T00:21:26.435021011Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 1 00:21:26.437416 containerd[1578]: time="2025-11-01T00:21:26.436366695Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:26.440550 containerd[1578]: time="2025-11-01T00:21:26.440462413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:26.442577 containerd[1578]: time="2025-11-01T00:21:26.442519908Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.670725149s" Nov 1 00:21:26.442825 containerd[1578]: time="2025-11-01T00:21:26.442798254Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:21:26.443682 containerd[1578]: time="2025-11-01T00:21:26.443609097Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:21:27.020070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3053358052.mount: Deactivated successfully. 
Nov 1 00:21:27.025534 containerd[1578]: time="2025-11-01T00:21:27.024530811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:27.026507 containerd[1578]: time="2025-11-01T00:21:27.026162664Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 1 00:21:27.027429 containerd[1578]: time="2025-11-01T00:21:27.027346186Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:27.030444 containerd[1578]: time="2025-11-01T00:21:27.030367250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:27.032402 containerd[1578]: time="2025-11-01T00:21:27.032147480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 588.323068ms" Nov 1 00:21:27.032402 containerd[1578]: time="2025-11-01T00:21:27.032220973Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:21:27.033667 containerd[1578]: time="2025-11-01T00:21:27.033284644Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:21:27.485820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748459841.mount: Deactivated successfully. Nov 1 00:21:27.861601 systemd-resolved[1480]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Nov 1 00:21:29.400834 containerd[1578]: time="2025-11-01T00:21:29.400762698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:29.402100 containerd[1578]: time="2025-11-01T00:21:29.402047451Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 1 00:21:29.404157 containerd[1578]: time="2025-11-01T00:21:29.402632170Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:29.406254 containerd[1578]: time="2025-11-01T00:21:29.406217267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:29.407662 containerd[1578]: time="2025-11-01T00:21:29.407622729Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.374285332s" Nov 1 00:21:29.407754 containerd[1578]: time="2025-11-01T00:21:29.407664330Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:21:32.529711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:32.542796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:32.590650 systemd[1]: Reloading requested from client PID 2185 ('systemctl') (unit session-7.scope)... Nov 1 00:21:32.591031 systemd[1]: Reloading... Nov 1 00:21:32.746415 zram_generator::config[2225]: No configuration found. Nov 1 00:21:32.942103 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:21:33.079935 systemd[1]: Reloading finished in 488 ms. Nov 1 00:21:33.138420 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:21:33.138519 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:21:33.138908 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:33.148425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:33.358910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:33.360606 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:21:33.427323 kubelet[2287]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:21:33.430036 kubelet[2287]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 1 00:21:33.430036 kubelet[2287]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:21:33.430036 kubelet[2287]: I1101 00:21:33.427494 2287 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:21:33.632070 kubelet[2287]: I1101 00:21:33.631908 2287 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:21:33.632409 kubelet[2287]: I1101 00:21:33.632388 2287 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:21:33.632852 kubelet[2287]: I1101 00:21:33.632827 2287 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:21:33.678914 kubelet[2287]: E1101 00:21:33.678848 2287 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://165.232.144.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 165.232.144.31:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:33.679162 kubelet[2287]: I1101 00:21:33.679027 2287 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:21:33.689844 kubelet[2287]: E1101 00:21:33.689794 2287 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:21:33.689844 kubelet[2287]: I1101 00:21:33.689840 2287 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:21:33.694648 kubelet[2287]: I1101 00:21:33.694602 2287 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:21:33.696960 kubelet[2287]: I1101 00:21:33.696826 2287 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:21:33.697217 kubelet[2287]: I1101 00:21:33.696931 2287 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-f16f13e513","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:21:33.697330 kubelet[2287]: I1101 00:21:33.697230 2287 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:21:33.697330 kubelet[2287]: I1101 00:21:33.697246 2287 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:21:33.697487 kubelet[2287]: I1101 00:21:33.697466 2287 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:33.702860 kubelet[2287]: I1101 00:21:33.702673 2287 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:21:33.702860 kubelet[2287]: I1101 00:21:33.702748 2287 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:21:33.702860 kubelet[2287]: I1101 00:21:33.702781 2287 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:21:33.702860 kubelet[2287]: I1101 00:21:33.702799 2287 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:21:33.711878 kubelet[2287]: W1101 00:21:33.711613 2287 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.144.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f16f13e513&limit=500&resourceVersion=0": dial tcp 165.232.144.31:6443: connect: connection refused Nov 1 00:21:33.712451 kubelet[2287]: E1101 00:21:33.712299 2287 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://165.232.144.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f16f13e513&limit=500&resourceVersion=0\": dial tcp 165.232.144.31:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:33.712737 
kubelet[2287]: I1101 00:21:33.712578 2287 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:21:33.716364 kubelet[2287]: I1101 00:21:33.716317 2287 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:21:33.718430 kubelet[2287]: W1101 00:21:33.717165 2287 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:21:33.718890 kubelet[2287]: I1101 00:21:33.718866 2287 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:21:33.719013 kubelet[2287]: I1101 00:21:33.719002 2287 server.go:1287] "Started kubelet" Nov 1 00:21:33.723039 kubelet[2287]: W1101 00:21:33.722497 2287 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.144.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 165.232.144.31:6443: connect: connection refused Nov 1 00:21:33.723039 kubelet[2287]: E1101 00:21:33.722589 2287 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://165.232.144.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.144.31:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:33.723039 kubelet[2287]: I1101 00:21:33.722777 2287 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:21:33.726037 kubelet[2287]: I1101 00:21:33.725986 2287 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:21:33.726411 kubelet[2287]: I1101 00:21:33.726336 2287 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:21:33.726890 kubelet[2287]: I1101 00:21:33.726871 2287 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:21:33.731017 kubelet[2287]: E1101 00:21:33.729608 2287 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.144.31:6443/api/v1/namespaces/default/events\": dial tcp 165.232.144.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-f16f13e513.1873ba1885cffe79 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-f16f13e513,UID:ci-4081.3.6-n-f16f13e513,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-f16f13e513,},FirstTimestamp:2025-11-01 00:21:33.718978169 +0000 UTC m=+0.347143142,LastTimestamp:2025-11-01 00:21:33.718978169 +0000 UTC m=+0.347143142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-f16f13e513,}" Nov 1 00:21:33.733733 kubelet[2287]: I1101 00:21:33.733705 2287 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:21:33.737452 kubelet[2287]: I1101 00:21:33.737404 2287 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:21:33.738085 kubelet[2287]: E1101 00:21:33.737838 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f16f13e513\" not found" Nov 1 00:21:33.740849 kubelet[2287]: I1101 00:21:33.740809 2287 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:21:33.740998 kubelet[2287]: I1101 00:21:33.740965 2287 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:21:33.744439 kubelet[2287]: I1101 00:21:33.743367 2287 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:21:33.748080 kubelet[2287]: W1101 00:21:33.747439 2287 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.144.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.144.31:6443: connect: connection refused Nov 1 00:21:33.748080 kubelet[2287]: E1101 00:21:33.747573 2287 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://165.232.144.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.144.31:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:33.748080 kubelet[2287]: E1101 00:21:33.747679 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.144.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f16f13e513?timeout=10s\": dial tcp 165.232.144.31:6443: connect: connection refused" interval="200ms" Nov 1 00:21:33.748080 kubelet[2287]: I1101 00:21:33.747975 2287 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:21:33.748438 kubelet[2287]: I1101 00:21:33.748108 2287 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:21:33.755895 kubelet[2287]: E1101 00:21:33.755831 2287 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:21:33.756626 kubelet[2287]: I1101 00:21:33.756596 2287 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:21:33.765209 kubelet[2287]: I1101 00:21:33.765131 2287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:21:33.767268 kubelet[2287]: I1101 00:21:33.767228 2287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:21:33.767593 kubelet[2287]: I1101 00:21:33.767580 2287 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:21:33.767706 kubelet[2287]: I1101 00:21:33.767694 2287 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
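The "Failed to ensure lease exists, will retry" entry above reports interval="200ms"; subsequent entries in this log show the interval doubling to 400ms, 800ms, and 1.6s while the apiserver stays unreachable. A minimal Go sketch of that doubling backoff — the 7s cap and the ensure callback are illustrative assumptions, not kubelet's actual lease controller:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries ensure() until it succeeds, doubling the wait
// between attempts, mirroring the interval=200ms/400ms/800ms/1.6s
// progression visible in the lease-controller log entries.
func retryWithBackoff(ensure func() error) {
	interval := 200 * time.Millisecond  // first interval reported in the log
	const maxInterval = 7 * time.Second // assumed cap, not taken from the log

	for {
		if err := ensure(); err == nil {
			return
		}
		fmt.Printf("Failed to ensure lease exists, will retry interval=%v\n", interval)
		time.Sleep(interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("connect: connection refused") // apiserver not up yet
		}
		return nil
	})
}
```

The doubling stops mattering once the static kube-apiserver pod (created later in this log) starts accepting connections and the lease POST succeeds.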
Nov 1 00:21:33.767775 kubelet[2287]: I1101 00:21:33.767767 2287 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:21:33.767922 kubelet[2287]: E1101 00:21:33.767899 2287 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:21:33.773730 kubelet[2287]: W1101 00:21:33.773656 2287 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.144.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.144.31:6443: connect: connection refused Nov 1 00:21:33.774029 kubelet[2287]: E1101 00:21:33.773978 2287 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://165.232.144.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.144.31:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:33.792254 kubelet[2287]: I1101 00:21:33.792212 2287 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:21:33.792928 kubelet[2287]: I1101 00:21:33.792577 2287 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:21:33.792928 kubelet[2287]: I1101 00:21:33.792618 2287 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:33.795662 kubelet[2287]: I1101 00:21:33.795629 2287 policy_none.go:49] "None policy: Start" Nov 1 00:21:33.795846 kubelet[2287]: I1101 00:21:33.795830 2287 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:21:33.795961 kubelet[2287]: I1101 00:21:33.795946 2287 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:21:33.803854 kubelet[2287]: I1101 00:21:33.803799 2287 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:21:33.804335 kubelet[2287]: I1101 00:21:33.804310 2287 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:21:33.806093 kubelet[2287]: I1101 00:21:33.804460 2287 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:21:33.806093 kubelet[2287]: I1101 00:21:33.805902 2287 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:21:33.807136 kubelet[2287]: E1101 00:21:33.807118 2287 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:21:33.807304 kubelet[2287]: E1101 00:21:33.807282 2287 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-f16f13e513\" not found" Nov 1 00:21:33.876328 kubelet[2287]: E1101 00:21:33.876250 2287 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.876672 kubelet[2287]: E1101 00:21:33.876626 2287 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.881407 kubelet[2287]: E1101 00:21:33.881357 2287 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.905880 kubelet[2287]: I1101 00:21:33.905740 2287 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.907489 kubelet[2287]: E1101 00:21:33.906549 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://165.232.144.31:6443/api/v1/nodes\": dial tcp 165.232.144.31:6443: connect: connection refused" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.941334 kubelet[2287]: I1101 00:21:33.941231 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08b7850b5a1a8dcff0f462918a61c2bf-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" (UID: \"08b7850b5a1a8dcff0f462918a61c2bf\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.941334 kubelet[2287]: I1101 00:21:33.941308 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08b7850b5a1a8dcff0f462918a61c2bf-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" (UID: \"08b7850b5a1a8dcff0f462918a61c2bf\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.941334 kubelet[2287]: I1101 00:21:33.941336 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e11f132adaf5a9c6eaa1f76daf3f9733-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-f16f13e513\" (UID: \"e11f132adaf5a9c6eaa1f76daf3f9733\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.941587 kubelet[2287]: I1101 00:21:33.941392 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08b7850b5a1a8dcff0f462918a61c2bf-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" (UID: \"08b7850b5a1a8dcff0f462918a61c2bf\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.941587 kubelet[2287]: I1101 00:21:33.941419 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1bef007f0a3127fa2613097fd6b71668-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f16f13e513\" (UID: \"1bef007f0a3127fa2613097fd6b71668\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.942510 
kubelet[2287]: I1101 00:21:33.942461 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1bef007f0a3127fa2613097fd6b71668-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-f16f13e513\" (UID: \"1bef007f0a3127fa2613097fd6b71668\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.942611 kubelet[2287]: I1101 00:21:33.942528 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08b7850b5a1a8dcff0f462918a61c2bf-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" (UID: \"08b7850b5a1a8dcff0f462918a61c2bf\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.942611 kubelet[2287]: I1101 00:21:33.942563 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08b7850b5a1a8dcff0f462918a61c2bf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" (UID: \"08b7850b5a1a8dcff0f462918a61c2bf\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.942611 kubelet[2287]: I1101 00:21:33.942598 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1bef007f0a3127fa2613097fd6b71668-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f16f13e513\" (UID: \"1bef007f0a3127fa2613097fd6b71668\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:33.950255 kubelet[2287]: E1101 00:21:33.950188 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.144.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f16f13e513?timeout=10s\": dial tcp 165.232.144.31:6443: connect: connection refused" interval="400ms" Nov 1 00:21:34.108361 kubelet[2287]: I1101 00:21:34.108316 2287 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:34.108907 kubelet[2287]: E1101 00:21:34.108871 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://165.232.144.31:6443/api/v1/nodes\": dial tcp 165.232.144.31:6443: connect: connection refused" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:34.179485 kubelet[2287]: E1101 00:21:34.179272 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:34.180221 kubelet[2287]: E1101 00:21:34.179926 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:34.180991 containerd[1578]: time="2025-11-01T00:21:34.180712839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-f16f13e513,Uid:08b7850b5a1a8dcff0f462918a61c2bf,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:34.180991 containerd[1578]: time="2025-11-01T00:21:34.180780858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-f16f13e513,Uid:1bef007f0a3127fa2613097fd6b71668,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:34.183410 kubelet[2287]: E1101 
00:21:34.183107 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:34.183342 systemd-resolved[1480]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Nov 1 00:21:34.187222 containerd[1578]: time="2025-11-01T00:21:34.187170089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-f16f13e513,Uid:e11f132adaf5a9c6eaa1f76daf3f9733,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:34.350992 kubelet[2287]: E1101 00:21:34.350913 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.144.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f16f13e513?timeout=10s\": dial tcp 165.232.144.31:6443: connect: connection refused" interval="800ms" Nov 1 00:21:34.511323 kubelet[2287]: I1101 00:21:34.510890 2287 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:34.512265 kubelet[2287]: E1101 00:21:34.511312 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://165.232.144.31:6443/api/v1/nodes\": dial tcp 165.232.144.31:6443: connect: connection refused" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:34.604942 kubelet[2287]: W1101 00:21:34.604780 2287 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.144.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f16f13e513&limit=500&resourceVersion=0": dial tcp 165.232.144.31:6443: connect: connection refused Nov 1 00:21:34.604942 kubelet[2287]: E1101 00:21:34.604879 2287 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://165.232.144.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f16f13e513&limit=500&resourceVersion=0\": dial tcp 165.232.144.31:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:34.631764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198756581.mount: Deactivated successfully. 
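Every reflector failure above ends in "dial tcp 165.232.144.31:6443: connect: connection refused" — nothing is listening yet, because the kubelet itself is in the middle of launching the static apiserver pod. A stand-alone Go probe that performs the same dial the reflectors are failing on (address taken from the log; the 2s timeout is an arbitrary choice):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same TCP dial the client-go reflectors attempt against the apiserver.
	conn, err := net.DialTimeout("tcp", "165.232.144.31:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```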
Nov 1 00:21:34.638408 containerd[1578]: time="2025-11-01T00:21:34.636068534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:34.638408 containerd[1578]: time="2025-11-01T00:21:34.636965030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:21:34.638408 containerd[1578]: time="2025-11-01T00:21:34.637503749Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:34.638623 containerd[1578]: time="2025-11-01T00:21:34.638566896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:21:34.639189 containerd[1578]: time="2025-11-01T00:21:34.639151377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:21:34.639758 containerd[1578]: time="2025-11-01T00:21:34.639724654Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:34.640670 containerd[1578]: time="2025-11-01T00:21:34.640629832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 459.836386ms" Nov 1 00:21:34.641841 containerd[1578]: time="2025-11-01T00:21:34.641802225Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:34.643254 containerd[1578]: time="2025-11-01T00:21:34.643209424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:34.656638 containerd[1578]: time="2025-11-01T00:21:34.656577527Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.711354ms" Nov 1 00:21:34.659764 containerd[1578]: time="2025-11-01T00:21:34.659706382Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.428333ms" Nov 1 00:21:34.674673 kubelet[2287]: W1101 00:21:34.674598 2287 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.144.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.144.31:6443: connect: connection refused Nov 1 00:21:34.674920 kubelet[2287]: 
E1101 00:21:34.674894 2287 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://165.232.144.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.144.31:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:34.731085 kubelet[2287]: W1101 00:21:34.731013 2287 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.144.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 165.232.144.31:6443: connect: connection refused Nov 1 00:21:34.731398 kubelet[2287]: E1101 00:21:34.731362 2287 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://165.232.144.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.144.31:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:34.842178 containerd[1578]: time="2025-11-01T00:21:34.841298434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:34.842687 containerd[1578]: time="2025-11-01T00:21:34.842493539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:34.843210 containerd[1578]: time="2025-11-01T00:21:34.842922091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:34.843210 containerd[1578]: time="2025-11-01T00:21:34.843075415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:34.845871 containerd[1578]: time="2025-11-01T00:21:34.845766699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:34.846618 containerd[1578]: time="2025-11-01T00:21:34.846467839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:34.849397 containerd[1578]: time="2025-11-01T00:21:34.848557555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:34.849397 containerd[1578]: time="2025-11-01T00:21:34.848593356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:34.849397 containerd[1578]: time="2025-11-01T00:21:34.848765783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:34.850050 containerd[1578]: time="2025-11-01T00:21:34.849741226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:34.850050 containerd[1578]: time="2025-11-01T00:21:34.849778910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:34.850050 containerd[1578]: time="2025-11-01T00:21:34.849935885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:34.986807 containerd[1578]: time="2025-11-01T00:21:34.986751060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-f16f13e513,Uid:08b7850b5a1a8dcff0f462918a61c2bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"64f71e405298e10d19644fb79c63d355dc4b8f70da9fe9130c25b6289276a809\"" Nov 1 00:21:34.990020 kubelet[2287]: E1101 00:21:34.989855 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:34.993014 containerd[1578]: time="2025-11-01T00:21:34.992715573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-f16f13e513,Uid:e11f132adaf5a9c6eaa1f76daf3f9733,Namespace:kube-system,Attempt:0,} returns sandbox id \"73246eee9e3984a3773fde9927ec8ce208f60bef4c1fdb2b5b177b644e15fbf9\"" Nov 1 00:21:34.994765 containerd[1578]: time="2025-11-01T00:21:34.994298513Z" level=info msg="CreateContainer within sandbox \"64f71e405298e10d19644fb79c63d355dc4b8f70da9fe9130c25b6289276a809\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:21:34.996100 kubelet[2287]: E1101 00:21:34.995742 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:34.998294 containerd[1578]: time="2025-11-01T00:21:34.998258205Z" level=info msg="CreateContainer within sandbox \"73246eee9e3984a3773fde9927ec8ce208f60bef4c1fdb2b5b177b644e15fbf9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:21:35.005559 containerd[1578]: time="2025-11-01T00:21:35.005290182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-f16f13e513,Uid:1bef007f0a3127fa2613097fd6b71668,Namespace:kube-system,Attempt:0,} returns sandbox id \"418780da17fda2ec672bc6d6f93fed2e77c1f1c38b98c9da52008fb585f907b2\"" Nov 1 00:21:35.008597 kubelet[2287]: E1101 00:21:35.008342 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:35.015167 containerd[1578]: time="2025-11-01T00:21:35.015111508Z" level=info msg="CreateContainer within sandbox \"418780da17fda2ec672bc6d6f93fed2e77c1f1c38b98c9da52008fb585f907b2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:21:35.019979 containerd[1578]: time="2025-11-01T00:21:35.019706442Z" level=info msg="CreateContainer within sandbox \"64f71e405298e10d19644fb79c63d355dc4b8f70da9fe9130c25b6289276a809\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d96616657d4698d95c68d089cc6db89ccbd59f3229f7ebfd919b0e1adca1a572\"" Nov 1 00:21:35.020659 containerd[1578]: time="2025-11-01T00:21:35.020621997Z" level=info msg="StartContainer for \"d96616657d4698d95c68d089cc6db89ccbd59f3229f7ebfd919b0e1adca1a572\"" Nov 1 00:21:35.025677 containerd[1578]: time="2025-11-01T00:21:35.025464064Z" level=info msg="CreateContainer within sandbox \"73246eee9e3984a3773fde9927ec8ce208f60bef4c1fdb2b5b177b644e15fbf9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"76f7f451034e86d0cb2c0ca50bab0a9d044b4e7053311aaefa2cf8c86d3ca391\"" Nov 1 00:21:35.026905 containerd[1578]: 
time="2025-11-01T00:21:35.026831464Z" level=info msg="StartContainer for \"76f7f451034e86d0cb2c0ca50bab0a9d044b4e7053311aaefa2cf8c86d3ca391\"" Nov 1 00:21:35.035779 containerd[1578]: time="2025-11-01T00:21:35.035701608Z" level=info msg="CreateContainer within sandbox \"418780da17fda2ec672bc6d6f93fed2e77c1f1c38b98c9da52008fb585f907b2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"09aa1c8b6c64de3b0093bd3eaa2c65ecb202fa75ae944461838475d0d5171ba6\"" Nov 1 00:21:35.037240 containerd[1578]: time="2025-11-01T00:21:35.037114398Z" level=info msg="StartContainer for \"09aa1c8b6c64de3b0093bd3eaa2c65ecb202fa75ae944461838475d0d5171ba6\"" Nov 1 00:21:35.154034 kubelet[2287]: E1101 00:21:35.151869 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.144.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f16f13e513?timeout=10s\": dial tcp 165.232.144.31:6443: connect: connection refused" interval="1.6s" Nov 1 00:21:35.162983 containerd[1578]: time="2025-11-01T00:21:35.162862938Z" level=info msg="StartContainer for \"76f7f451034e86d0cb2c0ca50bab0a9d044b4e7053311aaefa2cf8c86d3ca391\" returns successfully" Nov 1 00:21:35.203834 containerd[1578]: time="2025-11-01T00:21:35.203308117Z" level=info msg="StartContainer for \"d96616657d4698d95c68d089cc6db89ccbd59f3229f7ebfd919b0e1adca1a572\" returns successfully" Nov 1 00:21:35.250840 containerd[1578]: time="2025-11-01T00:21:35.250714150Z" level=info msg="StartContainer for \"09aa1c8b6c64de3b0093bd3eaa2c65ecb202fa75ae944461838475d0d5171ba6\" returns successfully" Nov 1 00:21:35.314890 kubelet[2287]: I1101 00:21:35.314159 2287 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:35.316336 kubelet[2287]: E1101 00:21:35.316215 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://165.232.144.31:6443/api/v1/nodes\": dial tcp 165.232.144.31:6443: connect: connection refused" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:35.348609 kubelet[2287]: W1101 00:21:35.348478 2287 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.144.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.144.31:6443: connect: connection refused Nov 1 00:21:35.348609 kubelet[2287]: E1101 00:21:35.348571 2287 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://165.232.144.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.144.31:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:35.802593 kubelet[2287]: E1101 00:21:35.801811 2287 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:35.802593 kubelet[2287]: E1101 00:21:35.802064 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:35.809950 kubelet[2287]: E1101 00:21:35.808368 2287 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:35.809950 kubelet[2287]: E1101 
00:21:35.808538 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:35.813395 kubelet[2287]: E1101 00:21:35.813001 2287 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:35.813395 kubelet[2287]: E1101 00:21:35.813180 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:36.816472 kubelet[2287]: E1101 00:21:36.815247 2287 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:36.818313 kubelet[2287]: E1101 00:21:36.817167 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:36.818313 kubelet[2287]: E1101 00:21:36.817654 2287 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:36.818313 kubelet[2287]: E1101 00:21:36.817830 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:36.919638 kubelet[2287]: I1101 00:21:36.919108 2287 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:37.075643 kubelet[2287]: E1101 00:21:37.072853 2287 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:37.075643 kubelet[2287]: E1101 00:21:37.073044 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:37.822350 kubelet[2287]: E1101 00:21:37.819925 2287 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:37.822350 kubelet[2287]: E1101 00:21:37.820325 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:37.893434 kubelet[2287]: E1101 00:21:37.892421 2287 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-f16f13e513\" not found" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:37.906213 kubelet[2287]: I1101 00:21:37.905576 2287 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:37.906213 kubelet[2287]: E1101 00:21:37.905627 2287 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-f16f13e513\": node \"ci-4081.3.6-n-f16f13e513\" not found" Nov 1 00:21:37.941435 kubelet[2287]: I1101 00:21:37.941384 2287 kubelet.go:3194] "Creating a mirror pod for static 
pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:38.000033 kubelet[2287]: E1101 00:21:37.999879 2287 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-f16f13e513\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:38.000033 kubelet[2287]: I1101 00:21:37.999921 2287 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:38.002311 kubelet[2287]: E1101 00:21:38.002090 2287 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:38.002311 kubelet[2287]: I1101 00:21:38.002125 2287 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:38.004230 kubelet[2287]: E1101 00:21:38.004181 2287 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-f16f13e513\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:38.735116 kubelet[2287]: I1101 00:21:38.734006 2287 apiserver.go:52] "Watching apiserver" Nov 1 00:21:38.741349 kubelet[2287]: I1101 00:21:38.741288 2287 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:21:40.317599 kubelet[2287]: I1101 00:21:40.317296 2287 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:40.327091 kubelet[2287]: W1101 00:21:40.327043 2287 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:21:40.329395 kubelet[2287]: E1101 00:21:40.328112 2287 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:40.330021 systemd[1]: Reloading requested from client PID 2564 ('systemctl') (unit session-7.scope)... Nov 1 00:21:40.330038 systemd[1]: Reloading... Nov 1 00:21:40.436571 zram_generator::config[2606]: No configuration found. Nov 1 00:21:40.594444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:21:40.695921 systemd[1]: Reloading finished in 364 ms. Nov 1 00:21:40.743894 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:40.768202 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:21:40.768914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:40.780552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:40.926684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:21:40.941298 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:21:41.066778 kubelet[2664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:21:41.066778 kubelet[2664]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:21:41.066778 kubelet[2664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:21:41.066778 kubelet[2664]: I1101 00:21:41.066761 2664 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:21:41.077756 kubelet[2664]: I1101 00:21:41.077688 2664 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:21:41.077756 kubelet[2664]: I1101 00:21:41.077737 2664 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:21:41.078211 kubelet[2664]: I1101 00:21:41.078183 2664 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:21:41.085955 kubelet[2664]: I1101 00:21:41.085906 2664 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:21:41.091780 kubelet[2664]: I1101 00:21:41.091392 2664 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:21:41.098070 kubelet[2664]: E1101 00:21:41.097995 2664 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:21:41.098070 kubelet[2664]: I1101 00:21:41.098056 2664 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:21:41.102439 kubelet[2664]: I1101 00:21:41.102310 2664 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:21:41.103357 kubelet[2664]: I1101 00:21:41.103191 2664 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:21:41.103511 kubelet[2664]: I1101 00:21:41.103238 2664 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-f16f13e513","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:21:41.103511 kubelet[2664]: I1101 00:21:41.103508 2664 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:21:41.103695 kubelet[2664]: I1101 00:21:41.103519 2664 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:21:41.103695 kubelet[2664]: I1101 00:21:41.103593 2664 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:41.104133 kubelet[2664]: I1101 00:21:41.103791 2664 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:21:41.104133 kubelet[2664]: I1101 00:21:41.103822 2664 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:21:41.104876 kubelet[2664]: I1101 00:21:41.104796 2664 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:21:41.104876 kubelet[2664]: I1101 00:21:41.104827 2664 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:21:41.108060 kubelet[2664]: I1101 00:21:41.107715 2664 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:21:41.110500 kubelet[2664]: I1101 00:21:41.108937 2664 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:21:41.110500 kubelet[2664]: I1101 00:21:41.110193 2664 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:21:41.110500 kubelet[2664]: I1101 00:21:41.110228 2664 server.go:1287] "Started kubelet" Nov 1 00:21:41.116762 kubelet[2664]: I1101 00:21:41.116684 2664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:21:41.134838 kubelet[2664]: E1101 00:21:41.130614 2664 kubelet.go:1555] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:21:41.134838 kubelet[2664]: I1101 00:21:41.130728 2664 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:21:41.134838 kubelet[2664]: I1101 00:21:41.131767 2664 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:21:41.134838 kubelet[2664]: E1101 00:21:41.132177 2664 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f16f13e513\" not found" Nov 1 00:21:41.134838 kubelet[2664]: I1101 00:21:41.133097 2664 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:21:41.134838 kubelet[2664]: I1101 00:21:41.134710 2664 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:21:41.142307 kubelet[2664]: I1101 00:21:41.140006 2664 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:21:41.144402 kubelet[2664]: I1101 00:21:41.143214 2664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:21:41.145168 kubelet[2664]: I1101 00:21:41.144594 2664 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:21:41.145168 kubelet[2664]: I1101 00:21:41.144931 2664 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:21:41.151328 kubelet[2664]: I1101 00:21:41.151175 2664 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:21:41.159649 kubelet[2664]: I1101 00:21:41.159583 2664 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:21:41.171287 kubelet[2664]: I1101 00:21:41.170896 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:21:41.171512 kubelet[2664]: I1101 00:21:41.171405 2664 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:21:41.174060 kubelet[2664]: I1101 00:21:41.173917 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:21:41.174060 kubelet[2664]: I1101 00:21:41.173960 2664 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:21:41.174060 kubelet[2664]: I1101 00:21:41.173985 2664 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
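Both kubelet starts log "Systemd watchdog is not enabled" from watchdog_linux.go, meaning the kubelet.service unit sets no WatchdogSec= and so no periodic health ping is owed to systemd. A sketch of what that check and ping loop look like with github.com/coreos/go-systemd/v22 — whether kubelet wires its watchdog exactly this way is an assumption:

```go
package main

import (
	"fmt"
	"time"

	"github.com/coreos/go-systemd/v22/daemon"
)

func main() {
	// systemd advertises the watchdog interval via WATCHDOG_USEC;
	// a zero interval means WatchdogSec= is unset in the unit file.
	interval, err := daemon.SdWatchdogEnabled(false)
	if err != nil || interval == 0 {
		fmt.Println("systemd watchdog is not enabled") // matches the log line
		return
	}
	// Ping at half the configured interval so systemd never times us out.
	for range time.Tick(interval / 2) {
		daemon.SdNotify(false, daemon.SdNotifyWatchdog) // sends WATCHDOG=1
	}
}
```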
Nov 1 00:21:41.174060 kubelet[2664]: I1101 00:21:41.173992 2664 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:21:41.174060 kubelet[2664]: E1101 00:21:41.174047 2664 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:21:41.276471 kubelet[2664]: E1101 00:21:41.275256 2664 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:21:41.296918 kubelet[2664]: I1101 00:21:41.296886 2664 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:21:41.298514 kubelet[2664]: I1101 00:21:41.297140 2664 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:21:41.298514 kubelet[2664]: I1101 00:21:41.297170 2664 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:41.298514 kubelet[2664]: I1101 00:21:41.298465 2664 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:21:41.298514 kubelet[2664]: I1101 00:21:41.298499 2664 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:21:41.298829 kubelet[2664]: I1101 00:21:41.298535 2664 policy_none.go:49] "None policy: Start" Nov 1 00:21:41.298829 kubelet[2664]: I1101 00:21:41.298548 2664 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:21:41.298829 kubelet[2664]: I1101 00:21:41.298568 2664 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:21:41.298829 kubelet[2664]: I1101 00:21:41.298724 2664 state_mem.go:75] "Updated machine memory state" Nov 1 00:21:41.305025 kubelet[2664]: I1101 00:21:41.303969 2664 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:21:41.305025 kubelet[2664]: I1101 00:21:41.304213 2664 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:21:41.305025 kubelet[2664]: I1101 00:21:41.304228 2664 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:21:41.309429 kubelet[2664]: I1101 00:21:41.307881 2664 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:21:41.309890 kubelet[2664]: E1101 00:21:41.309869 2664 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:21:41.414680 kubelet[2664]: I1101 00:21:41.414648 2664 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.427606 kubelet[2664]: I1101 00:21:41.427318 2664 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.427606 kubelet[2664]: I1101 00:21:41.427571 2664 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.478605 kubelet[2664]: I1101 00:21:41.478363 2664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.480470 kubelet[2664]: I1101 00:21:41.479960 2664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.480470 kubelet[2664]: I1101 00:21:41.480326 2664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.490610 kubelet[2664]: W1101 00:21:41.490386 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:21:41.499488 kubelet[2664]: W1101 00:21:41.499405 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:21:41.500539 kubelet[2664]: W1101 00:21:41.499588 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:21:41.501107 kubelet[2664]: E1101 00:21:41.500899 2664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-f16f13e513\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.537879 kubelet[2664]: I1101 00:21:41.537609 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1bef007f0a3127fa2613097fd6b71668-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f16f13e513\" (UID: \"1bef007f0a3127fa2613097fd6b71668\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.537879 kubelet[2664]: I1101 00:21:41.537696 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08b7850b5a1a8dcff0f462918a61c2bf-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" (UID: \"08b7850b5a1a8dcff0f462918a61c2bf\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.537879 kubelet[2664]: I1101 00:21:41.537746 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08b7850b5a1a8dcff0f462918a61c2bf-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" (UID: \"08b7850b5a1a8dcff0f462918a61c2bf\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.537879 kubelet[2664]: I1101 00:21:41.537776 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08b7850b5a1a8dcff0f462918a61c2bf-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" (UID: \"08b7850b5a1a8dcff0f462918a61c2bf\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.537879 kubelet[2664]: I1101 00:21:41.537805 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1bef007f0a3127fa2613097fd6b71668-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f16f13e513\" (UID: \"1bef007f0a3127fa2613097fd6b71668\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.538327 kubelet[2664]: I1101 00:21:41.537845 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1bef007f0a3127fa2613097fd6b71668-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-f16f13e513\" (UID: \"1bef007f0a3127fa2613097fd6b71668\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.538327 kubelet[2664]: I1101 00:21:41.537886 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08b7850b5a1a8dcff0f462918a61c2bf-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" (UID: \"08b7850b5a1a8dcff0f462918a61c2bf\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.538327 kubelet[2664]: I1101 00:21:41.537914 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08b7850b5a1a8dcff0f462918a61c2bf-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-f16f13e513\" (UID: \"08b7850b5a1a8dcff0f462918a61c2bf\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.538327 kubelet[2664]: I1101 00:21:41.537944 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e11f132adaf5a9c6eaa1f76daf3f9733-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-f16f13e513\" (UID: \"e11f132adaf5a9c6eaa1f76daf3f9733\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:41.792477 kubelet[2664]: E1101 00:21:41.791538 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:41.801076 kubelet[2664]: E1101 00:21:41.801010 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:41.803905 kubelet[2664]: E1101 00:21:41.803823 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:42.107405 kubelet[2664]: I1101 00:21:42.106813 2664 apiserver.go:52] "Watching apiserver" Nov 1 00:21:42.133826 kubelet[2664]: I1101 00:21:42.133681 2664 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:21:42.214583 kubelet[2664]: I1101 00:21:42.214489 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f16f13e513" podStartSLOduration=2.214464249 
podStartE2EDuration="2.214464249s" podCreationTimestamp="2025-11-01 00:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:42.207469984 +0000 UTC m=+1.254947272" watchObservedRunningTime="2025-11-01 00:21:42.214464249 +0000 UTC m=+1.261941530" Nov 1 00:21:42.237926 kubelet[2664]: E1101 00:21:42.236256 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:42.237926 kubelet[2664]: I1101 00:21:42.236866 2664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:42.240433 kubelet[2664]: E1101 00:21:42.238924 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:42.249233 kubelet[2664]: I1101 00:21:42.247768 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f16f13e513" podStartSLOduration=1.247745394 podStartE2EDuration="1.247745394s" podCreationTimestamp="2025-11-01 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:42.226950473 +0000 UTC m=+1.274427756" watchObservedRunningTime="2025-11-01 00:21:42.247745394 +0000 UTC m=+1.295222675" Nov 1 00:21:42.253009 kubelet[2664]: W1101 00:21:42.252415 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:21:42.253009 kubelet[2664]: E1101 00:21:42.252658 2664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-f16f13e513\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f16f13e513" Nov 1 00:21:42.254316 kubelet[2664]: E1101 00:21:42.253736 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:42.268511 kubelet[2664]: I1101 00:21:42.268321 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f16f13e513" podStartSLOduration=1.268299271 podStartE2EDuration="1.268299271s" podCreationTimestamp="2025-11-01 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:42.248026896 +0000 UTC m=+1.295504177" watchObservedRunningTime="2025-11-01 00:21:42.268299271 +0000 UTC m=+1.315776529" Nov 1 00:21:43.238874 kubelet[2664]: E1101 00:21:43.238817 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:43.240699 kubelet[2664]: E1101 00:21:43.240464 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:43.241289 kubelet[2664]: E1101 00:21:43.241262 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:44.240405 kubelet[2664]: E1101 00:21:44.239924 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:45.686341 kubelet[2664]: I1101 00:21:45.686294 2664 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:21:45.687580 kubelet[2664]: I1101 00:21:45.687225 2664 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:21:45.687664 containerd[1578]: time="2025-11-01T00:21:45.686966578Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:21:46.289273 kubelet[2664]: E1101 00:21:46.289169 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:46.782927 kubelet[2664]: I1101 00:21:46.781659 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40d06c81-5142-492f-91fb-9e1c3fe7b9ad-lib-modules\") pod \"kube-proxy-tf2rl\" (UID: \"40d06c81-5142-492f-91fb-9e1c3fe7b9ad\") " pod="kube-system/kube-proxy-tf2rl" Nov 1 00:21:46.782927 kubelet[2664]: I1101 00:21:46.781731 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/40d06c81-5142-492f-91fb-9e1c3fe7b9ad-kube-proxy\") pod \"kube-proxy-tf2rl\" (UID: \"40d06c81-5142-492f-91fb-9e1c3fe7b9ad\") " pod="kube-system/kube-proxy-tf2rl" Nov 1 00:21:46.782927 kubelet[2664]: I1101 00:21:46.781773 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40d06c81-5142-492f-91fb-9e1c3fe7b9ad-xtables-lock\") pod \"kube-proxy-tf2rl\" (UID: \"40d06c81-5142-492f-91fb-9e1c3fe7b9ad\") " pod="kube-system/kube-proxy-tf2rl" Nov 1 00:21:46.782927 kubelet[2664]: I1101 00:21:46.781801 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8dfd\" (UniqueName: \"kubernetes.io/projected/40d06c81-5142-492f-91fb-9e1c3fe7b9ad-kube-api-access-h8dfd\") pod \"kube-proxy-tf2rl\" (UID: \"40d06c81-5142-492f-91fb-9e1c3fe7b9ad\") " pod="kube-system/kube-proxy-tf2rl" Nov 1 00:21:46.883817 kubelet[2664]: I1101 00:21:46.883059 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hnkd\" (UniqueName: \"kubernetes.io/projected/e7df8a73-c7f0-41bb-9910-8724d4dfe1d8-kube-api-access-6hnkd\") pod \"tigera-operator-7dcd859c48-7jsxd\" (UID: \"e7df8a73-c7f0-41bb-9910-8724d4dfe1d8\") " pod="tigera-operator/tigera-operator-7dcd859c48-7jsxd" Nov 1 00:21:46.883817 kubelet[2664]: I1101 00:21:46.883226 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7df8a73-c7f0-41bb-9910-8724d4dfe1d8-var-lib-calico\") pod \"tigera-operator-7dcd859c48-7jsxd\" (UID: \"e7df8a73-c7f0-41bb-9910-8724d4dfe1d8\") " pod="tigera-operator/tigera-operator-7dcd859c48-7jsxd" Nov 1 00:21:47.001531 kubelet[2664]: E1101 00:21:47.001445 2664 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:47.006272 containerd[1578]: time="2025-11-01T00:21:47.004148238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tf2rl,Uid:40d06c81-5142-492f-91fb-9e1c3fe7b9ad,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:47.062954 containerd[1578]: time="2025-11-01T00:21:47.056096746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:47.062954 containerd[1578]: time="2025-11-01T00:21:47.056209302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:47.062954 containerd[1578]: time="2025-11-01T00:21:47.056245838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:47.062954 containerd[1578]: time="2025-11-01T00:21:47.057411963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:47.133092 containerd[1578]: time="2025-11-01T00:21:47.133031739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tf2rl,Uid:40d06c81-5142-492f-91fb-9e1c3fe7b9ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"cec87c06b0ffa8b44aceba39c8df01571e863f28b547938c3589aaa665fe5011\"" Nov 1 00:21:47.134599 kubelet[2664]: E1101 00:21:47.134567 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:47.143413 containerd[1578]: time="2025-11-01T00:21:47.140577433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7jsxd,Uid:e7df8a73-c7f0-41bb-9910-8724d4dfe1d8,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:21:47.143413 containerd[1578]: time="2025-11-01T00:21:47.140740149Z" level=info msg="CreateContainer within sandbox \"cec87c06b0ffa8b44aceba39c8df01571e863f28b547938c3589aaa665fe5011\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:21:47.177330 containerd[1578]: time="2025-11-01T00:21:47.177138114Z" level=info msg="CreateContainer within sandbox \"cec87c06b0ffa8b44aceba39c8df01571e863f28b547938c3589aaa665fe5011\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac8e8cad99e7e7a487210ea5856372638a637812e2ce7e9e2f7f574c29fa8df1\"" Nov 1 00:21:47.181794 containerd[1578]: time="2025-11-01T00:21:47.180889851Z" level=info msg="StartContainer for \"ac8e8cad99e7e7a487210ea5856372638a637812e2ce7e9e2f7f574c29fa8df1\"" Nov 1 00:21:47.215500 containerd[1578]: time="2025-11-01T00:21:47.213008683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:47.215500 containerd[1578]: time="2025-11-01T00:21:47.213121061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:47.215500 containerd[1578]: time="2025-11-01T00:21:47.213147132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:47.215500 containerd[1578]: time="2025-11-01T00:21:47.213340660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:47.254977 kubelet[2664]: E1101 00:21:47.254933 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:47.316311 containerd[1578]: time="2025-11-01T00:21:47.315616797Z" level=info msg="StartContainer for \"ac8e8cad99e7e7a487210ea5856372638a637812e2ce7e9e2f7f574c29fa8df1\" returns successfully" Nov 1 00:21:47.338672 containerd[1578]: time="2025-11-01T00:21:47.338626528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7jsxd,Uid:e7df8a73-c7f0-41bb-9910-8724d4dfe1d8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"84dfc040e7e623f8c55699ba9ea456a7338b01e67e470bf43bdad1fe1ae11d8f\"" Nov 1 00:21:47.345436 containerd[1578]: time="2025-11-01T00:21:47.345364144Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:21:48.259081 kubelet[2664]: E1101 00:21:48.258984 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:48.749576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906531174.mount: Deactivated successfully. Nov 1 00:21:49.263623 kubelet[2664]: E1101 00:21:49.263101 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:49.392818 containerd[1578]: time="2025-11-01T00:21:49.392713907Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:49.393606 containerd[1578]: time="2025-11-01T00:21:49.393546656Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:21:49.397155 containerd[1578]: time="2025-11-01T00:21:49.395757519Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:49.397155 containerd[1578]: time="2025-11-01T00:21:49.396657225Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.051221548s" Nov 1 00:21:49.397155 containerd[1578]: time="2025-11-01T00:21:49.396698016Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:21:49.397469 containerd[1578]: time="2025-11-01T00:21:49.397380697Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:49.400018 containerd[1578]: time="2025-11-01T00:21:49.399984482Z" level=info msg="CreateContainer within sandbox 
\"84dfc040e7e623f8c55699ba9ea456a7338b01e67e470bf43bdad1fe1ae11d8f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:21:49.411650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686401535.mount: Deactivated successfully. Nov 1 00:21:49.414668 containerd[1578]: time="2025-11-01T00:21:49.414602471Z" level=info msg="CreateContainer within sandbox \"84dfc040e7e623f8c55699ba9ea456a7338b01e67e470bf43bdad1fe1ae11d8f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"27d0a99ba24b01f3f1f4452d9fc3564a39438dd25ff455c2513c48937685c22d\"" Nov 1 00:21:49.415406 containerd[1578]: time="2025-11-01T00:21:49.415304068Z" level=info msg="StartContainer for \"27d0a99ba24b01f3f1f4452d9fc3564a39438dd25ff455c2513c48937685c22d\"" Nov 1 00:21:49.487424 containerd[1578]: time="2025-11-01T00:21:49.487349388Z" level=info msg="StartContainer for \"27d0a99ba24b01f3f1f4452d9fc3564a39438dd25ff455c2513c48937685c22d\" returns successfully" Nov 1 00:21:50.280345 kubelet[2664]: I1101 00:21:50.279451 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tf2rl" podStartSLOduration=4.279422352 podStartE2EDuration="4.279422352s" podCreationTimestamp="2025-11-01 00:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:48.270755126 +0000 UTC m=+7.318232406" watchObservedRunningTime="2025-11-01 00:21:50.279422352 +0000 UTC m=+9.326899632" Nov 1 00:21:53.112414 kubelet[2664]: E1101 00:21:53.111077 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:53.260026 kubelet[2664]: I1101 00:21:53.258524 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-7jsxd" podStartSLOduration=5.204225663 podStartE2EDuration="7.258505027s" podCreationTimestamp="2025-11-01 00:21:46 +0000 UTC" firstStartedPulling="2025-11-01 00:21:47.34423412 +0000 UTC m=+6.391711389" lastFinishedPulling="2025-11-01 00:21:49.398513495 +0000 UTC m=+8.445990753" observedRunningTime="2025-11-01 00:21:50.280888047 +0000 UTC m=+9.328365328" watchObservedRunningTime="2025-11-01 00:21:53.258505027 +0000 UTC m=+12.305982307" Nov 1 00:21:53.634112 kubelet[2664]: E1101 00:21:53.633652 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:54.283002 kubelet[2664]: E1101 00:21:54.282953 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:21:56.525585 update_engine[1560]: I20251101 00:21:56.525444 1560 update_attempter.cc:509] Updating boot flags... Nov 1 00:21:56.600442 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3031) Nov 1 00:21:57.279101 sudo[1791]: pam_unix(sudo:session): session closed for user root Nov 1 00:21:57.287265 sshd[1787]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:57.302711 systemd[1]: sshd@6-165.232.144.31:22-139.178.68.195:52200.service: Deactivated successfully. Nov 1 00:21:57.318256 systemd[1]: session-7.scope: Deactivated successfully. 
Nov 1 00:21:57.328116 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:21:57.333572 systemd-logind[1552]: Removed session 7. Nov 1 00:22:04.517433 kubelet[2664]: I1101 00:22:04.517178 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd20c787-b30e-4174-b76a-650b20b7603b-tigera-ca-bundle\") pod \"calico-typha-764c88cd5d-fx6xm\" (UID: \"dd20c787-b30e-4174-b76a-650b20b7603b\") " pod="calico-system/calico-typha-764c88cd5d-fx6xm" Nov 1 00:22:04.517433 kubelet[2664]: I1101 00:22:04.517262 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd20c787-b30e-4174-b76a-650b20b7603b-typha-certs\") pod \"calico-typha-764c88cd5d-fx6xm\" (UID: \"dd20c787-b30e-4174-b76a-650b20b7603b\") " pod="calico-system/calico-typha-764c88cd5d-fx6xm" Nov 1 00:22:04.517433 kubelet[2664]: I1101 00:22:04.517302 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc54j\" (UniqueName: \"kubernetes.io/projected/dd20c787-b30e-4174-b76a-650b20b7603b-kube-api-access-nc54j\") pod \"calico-typha-764c88cd5d-fx6xm\" (UID: \"dd20c787-b30e-4174-b76a-650b20b7603b\") " pod="calico-system/calico-typha-764c88cd5d-fx6xm" Nov 1 00:22:04.618176 kubelet[2664]: I1101 00:22:04.618105 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0684b644-3b53-498a-8361-4c16600ee9b5-cni-bin-dir\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.618176 kubelet[2664]: I1101 00:22:04.618149 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0684b644-3b53-498a-8361-4c16600ee9b5-node-certs\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.618176 kubelet[2664]: I1101 00:22:04.618192 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0684b644-3b53-498a-8361-4c16600ee9b5-cni-log-dir\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.618436 kubelet[2664]: I1101 00:22:04.618213 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0684b644-3b53-498a-8361-4c16600ee9b5-lib-modules\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.618436 kubelet[2664]: I1101 00:22:04.618234 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0684b644-3b53-498a-8361-4c16600ee9b5-var-lib-calico\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.618436 kubelet[2664]: I1101 00:22:04.618264 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/0684b644-3b53-498a-8361-4c16600ee9b5-cni-net-dir\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.618436 kubelet[2664]: I1101 00:22:04.618285 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0684b644-3b53-498a-8361-4c16600ee9b5-flexvol-driver-host\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.618436 kubelet[2664]: I1101 00:22:04.618304 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0684b644-3b53-498a-8361-4c16600ee9b5-var-run-calico\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.618753 kubelet[2664]: I1101 00:22:04.618325 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0684b644-3b53-498a-8361-4c16600ee9b5-tigera-ca-bundle\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.618753 kubelet[2664]: I1101 00:22:04.618342 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0684b644-3b53-498a-8361-4c16600ee9b5-xtables-lock\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.620056 kubelet[2664]: I1101 00:22:04.620018 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2z52\" (UniqueName: \"kubernetes.io/projected/0684b644-3b53-498a-8361-4c16600ee9b5-kube-api-access-v2z52\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.620199 kubelet[2664]: I1101 00:22:04.620139 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0684b644-3b53-498a-8361-4c16600ee9b5-policysync\") pod \"calico-node-9gxmj\" (UID: \"0684b644-3b53-498a-8361-4c16600ee9b5\") " pod="calico-system/calico-node-9gxmj" Nov 1 00:22:04.684519 kubelet[2664]: E1101 00:22:04.683416 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:04.726534 kubelet[2664]: I1101 00:22:04.721619 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c76f0dc0-2591-4062-8741-1604477875d5-socket-dir\") pod \"csi-node-driver-mglhw\" (UID: \"c76f0dc0-2591-4062-8741-1604477875d5\") " pod="calico-system/csi-node-driver-mglhw" Nov 1 00:22:04.726534 kubelet[2664]: I1101 00:22:04.721708 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c76f0dc0-2591-4062-8741-1604477875d5-varrun\") pod 
\"csi-node-driver-mglhw\" (UID: \"c76f0dc0-2591-4062-8741-1604477875d5\") " pod="calico-system/csi-node-driver-mglhw" Nov 1 00:22:04.726534 kubelet[2664]: I1101 00:22:04.721775 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c76f0dc0-2591-4062-8741-1604477875d5-registration-dir\") pod \"csi-node-driver-mglhw\" (UID: \"c76f0dc0-2591-4062-8741-1604477875d5\") " pod="calico-system/csi-node-driver-mglhw" Nov 1 00:22:04.726534 kubelet[2664]: I1101 00:22:04.721801 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp8fz\" (UniqueName: \"kubernetes.io/projected/c76f0dc0-2591-4062-8741-1604477875d5-kube-api-access-cp8fz\") pod \"csi-node-driver-mglhw\" (UID: \"c76f0dc0-2591-4062-8741-1604477875d5\") " pod="calico-system/csi-node-driver-mglhw" Nov 1 00:22:04.726534 kubelet[2664]: I1101 00:22:04.721821 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c76f0dc0-2591-4062-8741-1604477875d5-kubelet-dir\") pod \"csi-node-driver-mglhw\" (UID: \"c76f0dc0-2591-4062-8741-1604477875d5\") " pod="calico-system/csi-node-driver-mglhw" Nov 1 00:22:04.760792 kubelet[2664]: E1101 00:22:04.756123 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:04.767874 containerd[1578]: time="2025-11-01T00:22:04.766165343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-764c88cd5d-fx6xm,Uid:dd20c787-b30e-4174-b76a-650b20b7603b,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:04.772120 kubelet[2664]: E1101 00:22:04.770881 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.772594 kubelet[2664]: W1101 00:22:04.772551 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.774581 kubelet[2664]: E1101 00:22:04.773866 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.823420 kubelet[2664]: E1101 00:22:04.823369 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.823729 kubelet[2664]: W1101 00:22:04.823639 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.823729 kubelet[2664]: E1101 00:22:04.823700 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:04.824301 kubelet[2664]: E1101 00:22:04.824231 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.824301 kubelet[2664]: W1101 00:22:04.824252 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.824954 kubelet[2664]: E1101 00:22:04.824841 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.824954 kubelet[2664]: W1101 00:22:04.824856 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.824954 kubelet[2664]: E1101 00:22:04.824873 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.825507 kubelet[2664]: E1101 00:22:04.825283 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.825507 kubelet[2664]: W1101 00:22:04.825301 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.825507 kubelet[2664]: E1101 00:22:04.825325 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.825797 kubelet[2664]: E1101 00:22:04.825681 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.825797 kubelet[2664]: W1101 00:22:04.825695 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.825797 kubelet[2664]: E1101 00:22:04.825708 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.826165 kubelet[2664]: E1101 00:22:04.826029 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.826423 kubelet[2664]: E1101 00:22:04.826312 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.826423 kubelet[2664]: W1101 00:22:04.826330 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.826423 kubelet[2664]: E1101 00:22:04.826362 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:04.827089 kubelet[2664]: E1101 00:22:04.826948 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.827089 kubelet[2664]: W1101 00:22:04.826962 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.827089 kubelet[2664]: E1101 00:22:04.826999 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.827683 kubelet[2664]: E1101 00:22:04.827526 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.827683 kubelet[2664]: W1101 00:22:04.827538 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.827683 kubelet[2664]: E1101 00:22:04.827559 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.828098 kubelet[2664]: E1101 00:22:04.827996 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.828098 kubelet[2664]: W1101 00:22:04.828012 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.828098 kubelet[2664]: E1101 00:22:04.828041 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.828580 kubelet[2664]: E1101 00:22:04.828419 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.828580 kubelet[2664]: W1101 00:22:04.828430 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.828580 kubelet[2664]: E1101 00:22:04.828450 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.829124 kubelet[2664]: E1101 00:22:04.828971 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.829124 kubelet[2664]: W1101 00:22:04.828990 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.829124 kubelet[2664]: E1101 00:22:04.829076 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:04.829762 kubelet[2664]: E1101 00:22:04.829599 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.829762 kubelet[2664]: W1101 00:22:04.829611 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.829762 kubelet[2664]: E1101 00:22:04.829708 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.830094 kubelet[2664]: E1101 00:22:04.829993 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.830094 kubelet[2664]: W1101 00:22:04.830003 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.830094 kubelet[2664]: E1101 00:22:04.830026 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.830648 kubelet[2664]: E1101 00:22:04.830457 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.830648 kubelet[2664]: W1101 00:22:04.830470 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.830648 kubelet[2664]: E1101 00:22:04.830491 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.830966 kubelet[2664]: E1101 00:22:04.830824 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.830966 kubelet[2664]: W1101 00:22:04.830838 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.830966 kubelet[2664]: E1101 00:22:04.830863 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.831734 kubelet[2664]: E1101 00:22:04.831317 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.831734 kubelet[2664]: W1101 00:22:04.831329 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.831734 kubelet[2664]: E1101 00:22:04.831443 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:04.832086 kubelet[2664]: E1101 00:22:04.832073 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.832159 kubelet[2664]: W1101 00:22:04.832149 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.832279 kubelet[2664]: E1101 00:22:04.832269 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.832602 kubelet[2664]: E1101 00:22:04.832589 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.832748 kubelet[2664]: W1101 00:22:04.832670 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.832748 kubelet[2664]: E1101 00:22:04.832698 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.833077 kubelet[2664]: E1101 00:22:04.832976 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.833077 kubelet[2664]: W1101 00:22:04.832987 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.833077 kubelet[2664]: E1101 00:22:04.833001 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.833955 kubelet[2664]: E1101 00:22:04.833496 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:04.834242 kubelet[2664]: E1101 00:22:04.834229 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.834331 kubelet[2664]: W1101 00:22:04.834320 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.834569 kubelet[2664]: E1101 00:22:04.834555 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:04.835658 containerd[1578]: time="2025-11-01T00:22:04.835019339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9gxmj,Uid:0684b644-3b53-498a-8361-4c16600ee9b5,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:04.836264 kubelet[2664]: E1101 00:22:04.835925 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.836264 kubelet[2664]: W1101 00:22:04.835939 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.836264 kubelet[2664]: E1101 00:22:04.836018 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.836544 kubelet[2664]: E1101 00:22:04.836477 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.836751 kubelet[2664]: W1101 00:22:04.836739 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.836895 kubelet[2664]: E1101 00:22:04.836823 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.837129 kubelet[2664]: E1101 00:22:04.837117 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.837204 kubelet[2664]: W1101 00:22:04.837191 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.837572 kubelet[2664]: E1101 00:22:04.837556 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.837915 kubelet[2664]: E1101 00:22:04.837898 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.838028 kubelet[2664]: W1101 00:22:04.838012 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.838214 kubelet[2664]: E1101 00:22:04.838097 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:04.838639 kubelet[2664]: E1101 00:22:04.838626 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.838903 kubelet[2664]: W1101 00:22:04.838760 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.838903 kubelet[2664]: E1101 00:22:04.838780 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.875906 kubelet[2664]: E1101 00:22:04.875848 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:04.875906 kubelet[2664]: W1101 00:22:04.875897 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:04.876122 kubelet[2664]: E1101 00:22:04.875947 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:04.879882 containerd[1578]: time="2025-11-01T00:22:04.878360836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:04.879882 containerd[1578]: time="2025-11-01T00:22:04.879824665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:04.879882 containerd[1578]: time="2025-11-01T00:22:04.879839709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:04.880928 containerd[1578]: time="2025-11-01T00:22:04.879987891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:04.922513 containerd[1578]: time="2025-11-01T00:22:04.921800317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:04.922513 containerd[1578]: time="2025-11-01T00:22:04.921982875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:04.922513 containerd[1578]: time="2025-11-01T00:22:04.922040803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:04.924925 containerd[1578]: time="2025-11-01T00:22:04.924827736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:05.163684 containerd[1578]: time="2025-11-01T00:22:05.163624398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-764c88cd5d-fx6xm,Uid:dd20c787-b30e-4174-b76a-650b20b7603b,Namespace:calico-system,Attempt:0,} returns sandbox id \"74a2558d729a87a93e1644f3d8023806654bf9fb8f5b148fe9c32dc41e267667\"" Nov 1 00:22:05.170440 kubelet[2664]: E1101 00:22:05.167903 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:05.175415 containerd[1578]: time="2025-11-01T00:22:05.175325626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:22:05.249770 containerd[1578]: time="2025-11-01T00:22:05.249689126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9gxmj,Uid:0684b644-3b53-498a-8361-4c16600ee9b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"2cdc6fc5598a55d07fad246bb53e1ed028b7d3fc313375e688f643e82da67b62\"" Nov 1 00:22:05.252343 kubelet[2664]: E1101 00:22:05.252293 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:06.175240 kubelet[2664]: E1101 00:22:06.175147 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:06.582122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656556828.mount: Deactivated successfully. 
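
The wall of driver-call failures above has a single root cause, stated in the W lines: the FlexVolume binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet (calico-node's init containers normally install it, via the flexvol-driver-host mount registered earlier), so every probe produces empty output and the unmarshal fails with "unexpected end of JSON input". For reference, a driver's init call is expected to print a one-line JSON status; a hedged sketch of that contract, with field names taken from the public FlexVolume examples rather than Calico's actual uds driver:

    package main

    import (
        "encoding/json"
        "os"
    )

    // DriverStatus approximates the JSON a FlexVolume driver must emit
    // for "init"; the empty output logged above is what fails to parse.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            _ = json.NewEncoder(os.Stdout).Encode(DriverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
        }
    }
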
Nov 1 00:22:07.941704 containerd[1578]: time="2025-11-01T00:22:07.941551467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:07.943044 containerd[1578]: time="2025-11-01T00:22:07.942965824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:22:07.944202 containerd[1578]: time="2025-11-01T00:22:07.943853997Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:07.946556 containerd[1578]: time="2025-11-01T00:22:07.946502688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:07.947522 containerd[1578]: time="2025-11-01T00:22:07.947489260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.772091958s" Nov 1 00:22:07.947646 containerd[1578]: time="2025-11-01T00:22:07.947630757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:22:07.950226 containerd[1578]: time="2025-11-01T00:22:07.949570029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:22:07.974056 containerd[1578]: time="2025-11-01T00:22:07.973994008Z" level=info msg="CreateContainer within sandbox \"74a2558d729a87a93e1644f3d8023806654bf9fb8f5b148fe9c32dc41e267667\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:22:08.005484 containerd[1578]: time="2025-11-01T00:22:08.002837005Z" level=info msg="CreateContainer within sandbox \"74a2558d729a87a93e1644f3d8023806654bf9fb8f5b148fe9c32dc41e267667\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0bde83ec5bd9ee466d13ec94a59c02881c9944de8bbca3dde4da5d44233848c8\"" Nov 1 00:22:08.010060 containerd[1578]: time="2025-11-01T00:22:08.006175984Z" level=info msg="StartContainer for \"0bde83ec5bd9ee466d13ec94a59c02881c9944de8bbca3dde4da5d44233848c8\"" Nov 1 00:22:08.154416 containerd[1578]: time="2025-11-01T00:22:08.153245188Z" level=info msg="StartContainer for \"0bde83ec5bd9ee466d13ec94a59c02881c9944de8bbca3dde4da5d44233848c8\" returns successfully" Nov 1 00:22:08.182092 kubelet[2664]: E1101 00:22:08.182036 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:08.375884 kubelet[2664]: E1101 00:22:08.375732 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:08.394850 kubelet[2664]: I1101 00:22:08.394529 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-764c88cd5d-fx6xm" podStartSLOduration=1.618070991 podStartE2EDuration="4.392262477s" podCreationTimestamp="2025-11-01 00:22:04 +0000 UTC" firstStartedPulling="2025-11-01 00:22:05.174759423 +0000 UTC m=+24.222236691" lastFinishedPulling="2025-11-01 00:22:07.948950905 +0000 UTC m=+26.996428177" observedRunningTime="2025-11-01 00:22:08.391701956 +0000 UTC m=+27.439179236" watchObservedRunningTime="2025-11-01 00:22:08.392262477 +0000 UTC m=+27.439739756" Nov 1 00:22:08.422672 kubelet[2664]: E1101 00:22:08.422590 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.422672 kubelet[2664]: W1101 00:22:08.422635 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.423244 kubelet[2664]: E1101 00:22:08.423037 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.423581 kubelet[2664]: E1101 00:22:08.423566 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.423728 kubelet[2664]: W1101 00:22:08.423660 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.423728 kubelet[2664]: E1101 00:22:08.423682 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.424998 kubelet[2664]: E1101 00:22:08.424853 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.424998 kubelet[2664]: W1101 00:22:08.424878 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.424998 kubelet[2664]: E1101 00:22:08.424900 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.426355 kubelet[2664]: E1101 00:22:08.425739 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.426355 kubelet[2664]: W1101 00:22:08.425759 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.426355 kubelet[2664]: E1101 00:22:08.425778 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:08.428413 kubelet[2664]: E1101 00:22:08.427443 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.428413 kubelet[2664]: W1101 00:22:08.427467 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.428413 kubelet[2664]: E1101 00:22:08.427489 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.429280 kubelet[2664]: E1101 00:22:08.429054 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.429280 kubelet[2664]: W1101 00:22:08.429075 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.429280 kubelet[2664]: E1101 00:22:08.429098 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.430411 kubelet[2664]: E1101 00:22:08.430076 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.430411 kubelet[2664]: W1101 00:22:08.430091 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.430411 kubelet[2664]: E1101 00:22:08.430145 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.431582 kubelet[2664]: E1101 00:22:08.431150 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.431582 kubelet[2664]: W1101 00:22:08.431164 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.431582 kubelet[2664]: E1101 00:22:08.431180 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.432832 kubelet[2664]: E1101 00:22:08.432648 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.432832 kubelet[2664]: W1101 00:22:08.432673 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.432832 kubelet[2664]: E1101 00:22:08.432689 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:08.433639 kubelet[2664]: E1101 00:22:08.433281 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.433639 kubelet[2664]: W1101 00:22:08.433298 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.433639 kubelet[2664]: E1101 00:22:08.433318 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.435752 kubelet[2664]: E1101 00:22:08.435596 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.435752 kubelet[2664]: W1101 00:22:08.435616 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.435752 kubelet[2664]: E1101 00:22:08.435636 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.436438 kubelet[2664]: E1101 00:22:08.436420 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.437051 kubelet[2664]: W1101 00:22:08.436506 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.437051 kubelet[2664]: E1101 00:22:08.436528 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.438783 kubelet[2664]: E1101 00:22:08.438394 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.438783 kubelet[2664]: W1101 00:22:08.438740 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.439188 kubelet[2664]: E1101 00:22:08.438875 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.440875 kubelet[2664]: E1101 00:22:08.440587 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.440875 kubelet[2664]: W1101 00:22:08.440606 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.440875 kubelet[2664]: E1101 00:22:08.440629 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:08.442052 kubelet[2664]: E1101 00:22:08.441941 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.442052 kubelet[2664]: W1101 00:22:08.441962 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.442052 kubelet[2664]: E1101 00:22:08.441982 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.461429 kubelet[2664]: E1101 00:22:08.461392 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.462528 kubelet[2664]: W1101 00:22:08.462161 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.462528 kubelet[2664]: E1101 00:22:08.462210 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.464405 kubelet[2664]: E1101 00:22:08.463589 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.464405 kubelet[2664]: W1101 00:22:08.464282 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.464405 kubelet[2664]: E1101 00:22:08.464331 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.467157 kubelet[2664]: E1101 00:22:08.467089 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.467157 kubelet[2664]: W1101 00:22:08.467121 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.467157 kubelet[2664]: E1101 00:22:08.467155 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.468608 kubelet[2664]: E1101 00:22:08.468233 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.468608 kubelet[2664]: W1101 00:22:08.468252 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.468608 kubelet[2664]: E1101 00:22:08.468301 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:08.469508 kubelet[2664]: E1101 00:22:08.469478 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.469508 kubelet[2664]: W1101 00:22:08.469501 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.470378 kubelet[2664]: E1101 00:22:08.469878 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.470378 kubelet[2664]: E1101 00:22:08.470208 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.470378 kubelet[2664]: W1101 00:22:08.470222 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.471876 kubelet[2664]: E1101 00:22:08.471449 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.471876 kubelet[2664]: E1101 00:22:08.471607 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.471876 kubelet[2664]: W1101 00:22:08.471623 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.471876 kubelet[2664]: E1101 00:22:08.471810 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.473307 kubelet[2664]: E1101 00:22:08.472448 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.473307 kubelet[2664]: W1101 00:22:08.472466 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.473307 kubelet[2664]: E1101 00:22:08.472671 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.473307 kubelet[2664]: E1101 00:22:08.472668 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.473307 kubelet[2664]: W1101 00:22:08.472679 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.473307 kubelet[2664]: E1101 00:22:08.472739 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:08.474998 kubelet[2664]: E1101 00:22:08.474047 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.474998 kubelet[2664]: W1101 00:22:08.474067 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.474998 kubelet[2664]: E1101 00:22:08.474088 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.476670 kubelet[2664]: E1101 00:22:08.476543 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.476670 kubelet[2664]: W1101 00:22:08.476572 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.477026 kubelet[2664]: E1101 00:22:08.476807 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.479140 kubelet[2664]: E1101 00:22:08.479114 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.479426 kubelet[2664]: W1101 00:22:08.479276 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.479538 kubelet[2664]: E1101 00:22:08.479524 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.480326 kubelet[2664]: E1101 00:22:08.480203 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.480326 kubelet[2664]: W1101 00:22:08.480220 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.480326 kubelet[2664]: E1101 00:22:08.480263 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.481188 kubelet[2664]: E1101 00:22:08.480813 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.481188 kubelet[2664]: W1101 00:22:08.480840 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.481188 kubelet[2664]: E1101 00:22:08.480878 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:08.481654 kubelet[2664]: E1101 00:22:08.481585 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.481654 kubelet[2664]: W1101 00:22:08.481599 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.481654 kubelet[2664]: E1101 00:22:08.481620 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.481917 kubelet[2664]: E1101 00:22:08.481892 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.481917 kubelet[2664]: W1101 00:22:08.481912 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.481972 kubelet[2664]: E1101 00:22:08.481927 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.482891 kubelet[2664]: E1101 00:22:08.482870 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.482891 kubelet[2664]: W1101 00:22:08.482886 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.482993 kubelet[2664]: E1101 00:22:08.482900 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:08.485091 kubelet[2664]: E1101 00:22:08.485057 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:08.485091 kubelet[2664]: W1101 00:22:08.485078 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:08.485226 kubelet[2664]: E1101 00:22:08.485097 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:09.362637 containerd[1578]: time="2025-11-01T00:22:09.362559962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:09.363914 containerd[1578]: time="2025-11-01T00:22:09.363701445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:22:09.364961 containerd[1578]: time="2025-11-01T00:22:09.364555930Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:09.367967 containerd[1578]: time="2025-11-01T00:22:09.367915067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:09.368910 kubelet[2664]: I1101 00:22:09.368881 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:22:09.370307 containerd[1578]: time="2025-11-01T00:22:09.369525805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.419756905s" Nov 1 00:22:09.370307 containerd[1578]: time="2025-11-01T00:22:09.369701866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:22:09.370631 kubelet[2664]: E1101 00:22:09.369984 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:09.375057 containerd[1578]: time="2025-11-01T00:22:09.374945602Z" level=info msg="CreateContainer within sandbox \"2cdc6fc5598a55d07fad246bb53e1ed028b7d3fc313375e688f643e82da67b62\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:22:09.425452 containerd[1578]: time="2025-11-01T00:22:09.425353109Z" level=info msg="CreateContainer within sandbox \"2cdc6fc5598a55d07fad246bb53e1ed028b7d3fc313375e688f643e82da67b62\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7a85f8e112c579c446ebf2ce08391261b5b899cc49b9e0dd5c22d6fdd9d557de\"" Nov 1 00:22:09.430398 containerd[1578]: time="2025-11-01T00:22:09.429666988Z" level=info msg="StartContainer for \"7a85f8e112c579c446ebf2ce08391261b5b899cc49b9e0dd5c22d6fdd9d557de\"" Nov 1 00:22:09.455767 kubelet[2664]: E1101 00:22:09.453639 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:09.455767 kubelet[2664]: W1101 00:22:09.453686 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:09.455767 kubelet[2664]: E1101 00:22:09.453731 2664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
Nov 1 00:22:09.619457 containerd[1578]: time="2025-11-01T00:22:09.619225200Z" level=info msg="StartContainer for \"7a85f8e112c579c446ebf2ce08391261b5b899cc49b9e0dd5c22d6fdd9d557de\" returns successfully" Nov 1 00:22:09.693961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a85f8e112c579c446ebf2ce08391261b5b899cc49b9e0dd5c22d6fdd9d557de-rootfs.mount: Deactivated successfully. 
Nov 1 00:22:09.707852 containerd[1578]: time="2025-11-01T00:22:09.693671524Z" level=info msg="shim disconnected" id=7a85f8e112c579c446ebf2ce08391261b5b899cc49b9e0dd5c22d6fdd9d557de namespace=k8s.io Nov 1 00:22:09.707852 containerd[1578]: time="2025-11-01T00:22:09.707845279Z" level=warning msg="cleaning up after shim disconnected" id=7a85f8e112c579c446ebf2ce08391261b5b899cc49b9e0dd5c22d6fdd9d557de namespace=k8s.io Nov 1 00:22:09.707852 containerd[1578]: time="2025-11-01T00:22:09.707864658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:22:10.175707 kubelet[2664]: E1101 00:22:10.174990 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:10.386814 kubelet[2664]: E1101 00:22:10.386244 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:10.391549 containerd[1578]: time="2025-11-01T00:22:10.390704840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:22:12.174354 kubelet[2664]: E1101 00:22:12.174288 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:12.608886 kubelet[2664]: I1101 00:22:12.608103 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:22:12.610261 kubelet[2664]: E1101 00:22:12.609871 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:13.391581 kubelet[2664]: E1101 00:22:13.391238 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:14.119591 containerd[1578]: time="2025-11-01T00:22:14.119469496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:14.121366 containerd[1578]: time="2025-11-01T00:22:14.121047574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:22:14.122441 containerd[1578]: time="2025-11-01T00:22:14.122030088Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:14.126479 containerd[1578]: time="2025-11-01T00:22:14.126355217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:14.127708 containerd[1578]: time="2025-11-01T00:22:14.127645093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.736166907s" Nov 1 00:22:14.128457 containerd[1578]: time="2025-11-01T00:22:14.127923995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:22:14.143335 containerd[1578]: time="2025-11-01T00:22:14.143072800Z" level=info msg="CreateContainer within sandbox \"2cdc6fc5598a55d07fad246bb53e1ed028b7d3fc313375e688f643e82da67b62\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:22:14.170528 containerd[1578]: time="2025-11-01T00:22:14.170242324Z" level=info msg="CreateContainer within sandbox \"2cdc6fc5598a55d07fad246bb53e1ed028b7d3fc313375e688f643e82da67b62\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"919a345bd0ed740d21beeed614f3c6f64a9671d6cc69d6a07f7304589f6c21de\"" Nov 1 00:22:14.172296 containerd[1578]: time="2025-11-01T00:22:14.172253490Z" level=info msg="StartContainer for \"919a345bd0ed740d21beeed614f3c6f64a9671d6cc69d6a07f7304589f6c21de\"" Nov 1 00:22:14.175837 kubelet[2664]: E1101 00:22:14.174634 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:14.282508 containerd[1578]: time="2025-11-01T00:22:14.281852290Z" level=info msg="StartContainer for \"919a345bd0ed740d21beeed614f3c6f64a9671d6cc69d6a07f7304589f6c21de\" returns successfully" Nov 1 00:22:14.407725 kubelet[2664]: E1101 00:22:14.407490 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:14.950742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-919a345bd0ed740d21beeed614f3c6f64a9671d6cc69d6a07f7304589f6c21de-rootfs.mount: Deactivated successfully. 
Nov 1 00:22:14.954439 containerd[1578]: time="2025-11-01T00:22:14.952911298Z" level=info msg="shim disconnected" id=919a345bd0ed740d21beeed614f3c6f64a9671d6cc69d6a07f7304589f6c21de namespace=k8s.io Nov 1 00:22:14.954439 containerd[1578]: time="2025-11-01T00:22:14.953039382Z" level=warning msg="cleaning up after shim disconnected" id=919a345bd0ed740d21beeed614f3c6f64a9671d6cc69d6a07f7304589f6c21de namespace=k8s.io Nov 1 00:22:14.954439 containerd[1578]: time="2025-11-01T00:22:14.953053332Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:22:14.981421 kubelet[2664]: I1101 00:22:14.981278 2664 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:22:15.134463 kubelet[2664]: I1101 00:22:15.134401 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7zj\" (UniqueName: \"kubernetes.io/projected/4937286b-cd30-4d33-95a1-43e7f1688846-kube-api-access-zl7zj\") pod \"whisker-6f9d7d8847-x447l\" (UID: \"4937286b-cd30-4d33-95a1-43e7f1688846\") " pod="calico-system/whisker-6f9d7d8847-x447l" Nov 1 00:22:15.134463 kubelet[2664]: I1101 00:22:15.134462 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/14a0cb3a-c17b-419c-80e4-76ffe3aff4c5-goldmane-key-pair\") pod \"goldmane-666569f655-2zlcz\" (UID: \"14a0cb3a-c17b-419c-80e4-76ffe3aff4c5\") " pod="calico-system/goldmane-666569f655-2zlcz" Nov 1 00:22:15.134755 kubelet[2664]: I1101 00:22:15.134485 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2v7p\" (UniqueName: \"kubernetes.io/projected/9637455a-d2d9-41ac-be89-aeef7331b819-kube-api-access-s2v7p\") pod \"coredns-668d6bf9bc-fxl85\" (UID: \"9637455a-d2d9-41ac-be89-aeef7331b819\") " pod="kube-system/coredns-668d6bf9bc-fxl85" Nov 1 00:22:15.134755 kubelet[2664]: I1101 00:22:15.134528 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8bpr\" (UniqueName: \"kubernetes.io/projected/fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b-kube-api-access-r8bpr\") pod \"calico-apiserver-7d65b76bbf-mht9v\" (UID: \"fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b\") " pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" Nov 1 00:22:15.134755 kubelet[2664]: I1101 00:22:15.134547 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4937286b-cd30-4d33-95a1-43e7f1688846-whisker-backend-key-pair\") pod \"whisker-6f9d7d8847-x447l\" (UID: \"4937286b-cd30-4d33-95a1-43e7f1688846\") " pod="calico-system/whisker-6f9d7d8847-x447l" Nov 1 00:22:15.134755 kubelet[2664]: I1101 00:22:15.134563 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dcb5\" (UniqueName: \"kubernetes.io/projected/945dd47a-80ea-4932-9742-bcde28f179e6-kube-api-access-6dcb5\") pod \"coredns-668d6bf9bc-s6zgv\" (UID: \"945dd47a-80ea-4932-9742-bcde28f179e6\") " pod="kube-system/coredns-668d6bf9bc-s6zgv" Nov 1 00:22:15.134755 kubelet[2664]: I1101 00:22:15.134582 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14a0cb3a-c17b-419c-80e4-76ffe3aff4c5-config\") pod \"goldmane-666569f655-2zlcz\" (UID: \"14a0cb3a-c17b-419c-80e4-76ffe3aff4c5\") " 
pod="calico-system/goldmane-666569f655-2zlcz" Nov 1 00:22:15.134994 kubelet[2664]: I1101 00:22:15.134601 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70d7c9dc-5ae1-4150-b4ab-1e59c014a05a-tigera-ca-bundle\") pod \"calico-kube-controllers-76d974b5c6-z26qx\" (UID: \"70d7c9dc-5ae1-4150-b4ab-1e59c014a05a\") " pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" Nov 1 00:22:15.134994 kubelet[2664]: I1101 00:22:15.134626 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spw79\" (UniqueName: \"kubernetes.io/projected/1172650d-8656-4c06-afa1-e156b3ef1286-kube-api-access-spw79\") pod \"calico-apiserver-7d65b76bbf-shhvk\" (UID: \"1172650d-8656-4c06-afa1-e156b3ef1286\") " pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" Nov 1 00:22:15.134994 kubelet[2664]: I1101 00:22:15.134645 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2cp9\" (UniqueName: \"kubernetes.io/projected/14a0cb3a-c17b-419c-80e4-76ffe3aff4c5-kube-api-access-b2cp9\") pod \"goldmane-666569f655-2zlcz\" (UID: \"14a0cb3a-c17b-419c-80e4-76ffe3aff4c5\") " pod="calico-system/goldmane-666569f655-2zlcz" Nov 1 00:22:15.134994 kubelet[2664]: I1101 00:22:15.134663 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b-calico-apiserver-certs\") pod \"calico-apiserver-7d65b76bbf-mht9v\" (UID: \"fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b\") " pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" Nov 1 00:22:15.134994 kubelet[2664]: I1101 00:22:15.134701 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4937286b-cd30-4d33-95a1-43e7f1688846-whisker-ca-bundle\") pod \"whisker-6f9d7d8847-x447l\" (UID: \"4937286b-cd30-4d33-95a1-43e7f1688846\") " pod="calico-system/whisker-6f9d7d8847-x447l" Nov 1 00:22:15.135231 kubelet[2664]: I1101 00:22:15.134716 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637455a-d2d9-41ac-be89-aeef7331b819-config-volume\") pod \"coredns-668d6bf9bc-fxl85\" (UID: \"9637455a-d2d9-41ac-be89-aeef7331b819\") " pod="kube-system/coredns-668d6bf9bc-fxl85" Nov 1 00:22:15.135231 kubelet[2664]: I1101 00:22:15.134734 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/945dd47a-80ea-4932-9742-bcde28f179e6-config-volume\") pod \"coredns-668d6bf9bc-s6zgv\" (UID: \"945dd47a-80ea-4932-9742-bcde28f179e6\") " pod="kube-system/coredns-668d6bf9bc-s6zgv" Nov 1 00:22:15.135231 kubelet[2664]: I1101 00:22:15.134750 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14a0cb3a-c17b-419c-80e4-76ffe3aff4c5-goldmane-ca-bundle\") pod \"goldmane-666569f655-2zlcz\" (UID: \"14a0cb3a-c17b-419c-80e4-76ffe3aff4c5\") " pod="calico-system/goldmane-666569f655-2zlcz" Nov 1 00:22:15.135231 kubelet[2664]: I1101 00:22:15.134772 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gwswh\" (UniqueName: \"kubernetes.io/projected/70d7c9dc-5ae1-4150-b4ab-1e59c014a05a-kube-api-access-gwswh\") pod \"calico-kube-controllers-76d974b5c6-z26qx\" (UID: \"70d7c9dc-5ae1-4150-b4ab-1e59c014a05a\") " pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" Nov 1 00:22:15.135231 kubelet[2664]: I1101 00:22:15.134795 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1172650d-8656-4c06-afa1-e156b3ef1286-calico-apiserver-certs\") pod \"calico-apiserver-7d65b76bbf-shhvk\" (UID: \"1172650d-8656-4c06-afa1-e156b3ef1286\") " pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" Nov 1 00:22:15.350216 kubelet[2664]: E1101 00:22:15.349988 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:15.352593 containerd[1578]: time="2025-11-01T00:22:15.351320847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6zgv,Uid:945dd47a-80ea-4932-9742-bcde28f179e6,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:15.395960 kubelet[2664]: E1101 00:22:15.394018 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:15.396116 containerd[1578]: time="2025-11-01T00:22:15.394049155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d65b76bbf-shhvk,Uid:1172650d-8656-4c06-afa1-e156b3ef1286,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:22:15.401744 containerd[1578]: time="2025-11-01T00:22:15.401696081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fxl85,Uid:9637455a-d2d9-41ac-be89-aeef7331b819,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:15.403668 containerd[1578]: time="2025-11-01T00:22:15.403483704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76d974b5c6-z26qx,Uid:70d7c9dc-5ae1-4150-b4ab-1e59c014a05a,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:15.410528 containerd[1578]: time="2025-11-01T00:22:15.407159750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d65b76bbf-mht9v,Uid:fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:22:15.410528 containerd[1578]: time="2025-11-01T00:22:15.409542548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:22:15.410711 kubelet[2664]: E1101 00:22:15.408115 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:15.425958 containerd[1578]: time="2025-11-01T00:22:15.424641230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2zlcz,Uid:14a0cb3a-c17b-419c-80e4-76ffe3aff4c5,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:15.425958 containerd[1578]: time="2025-11-01T00:22:15.424951659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9d7d8847-x447l,Uid:4937286b-cd30-4d33-95a1-43e7f1688846,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:15.896410 containerd[1578]: time="2025-11-01T00:22:15.896180977Z" level=error msg="Failed to destroy network for sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.904808 containerd[1578]: time="2025-11-01T00:22:15.904598317Z" level=error msg="Failed to destroy network for sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.905427 containerd[1578]: time="2025-11-01T00:22:15.905216353Z" level=error msg="encountered an error cleaning up failed sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.905427 containerd[1578]: time="2025-11-01T00:22:15.905351966Z" level=error msg="encountered an error cleaning up failed sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.922771 containerd[1578]: time="2025-11-01T00:22:15.922347328Z" level=error msg="Failed to destroy network for sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.922771 containerd[1578]: time="2025-11-01T00:22:15.922583539Z" level=error msg="Failed to destroy network for sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.922771 containerd[1578]: time="2025-11-01T00:22:15.922764050Z" level=error msg="encountered an error cleaning up failed sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.923068 containerd[1578]: time="2025-11-01T00:22:15.922838243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fxl85,Uid:9637455a-d2d9-41ac-be89-aeef7331b819,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.924809 containerd[1578]: time="2025-11-01T00:22:15.924667330Z" level=error msg="encountered an error cleaning up failed sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.924809 containerd[1578]: time="2025-11-01T00:22:15.924725239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d65b76bbf-shhvk,Uid:1172650d-8656-4c06-afa1-e156b3ef1286,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.930411 containerd[1578]: time="2025-11-01T00:22:15.929677296Z" level=error msg="Failed to destroy network for sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.932233 containerd[1578]: time="2025-11-01T00:22:15.930647943Z" level=error msg="encountered an error cleaning up failed sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.932233 containerd[1578]: time="2025-11-01T00:22:15.930707248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2zlcz,Uid:14a0cb3a-c17b-419c-80e4-76ffe3aff4c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.932233 containerd[1578]: time="2025-11-01T00:22:15.930783753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6zgv,Uid:945dd47a-80ea-4932-9742-bcde28f179e6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.932233 containerd[1578]: time="2025-11-01T00:22:15.930838956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d65b76bbf-mht9v,Uid:fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.932797 kubelet[2664]: E1101 00:22:15.931880 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Nov 1 00:22:15.932797 kubelet[2664]: E1101 00:22:15.932002 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" Nov 1 00:22:15.932797 kubelet[2664]: E1101 00:22:15.932041 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" Nov 1 00:22:15.933020 kubelet[2664]: E1101 00:22:15.932110 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d65b76bbf-mht9v_calico-apiserver(fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d65b76bbf-mht9v_calico-apiserver(fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" podUID="fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b" Nov 1 00:22:15.934187 kubelet[2664]: E1101 00:22:15.933229 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.934187 kubelet[2664]: E1101 00:22:15.933348 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" Nov 1 00:22:15.934187 kubelet[2664]: E1101 00:22:15.933434 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" Nov 1 00:22:15.934187 kubelet[2664]: E1101 00:22:15.933545 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.934541 kubelet[2664]: E1101 00:22:15.933611 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d65b76bbf-shhvk_calico-apiserver(1172650d-8656-4c06-afa1-e156b3ef1286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d65b76bbf-shhvk_calico-apiserver(1172650d-8656-4c06-afa1-e156b3ef1286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" podUID="1172650d-8656-4c06-afa1-e156b3ef1286" Nov 1 00:22:15.934541 kubelet[2664]: E1101 00:22:15.933683 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.934541 kubelet[2664]: E1101 00:22:15.933714 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fxl85" Nov 1 00:22:15.934759 kubelet[2664]: E1101 00:22:15.933740 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fxl85" Nov 1 00:22:15.934759 kubelet[2664]: E1101 00:22:15.933783 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fxl85_kube-system(9637455a-d2d9-41ac-be89-aeef7331b819)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fxl85_kube-system(9637455a-d2d9-41ac-be89-aeef7331b819)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fxl85" podUID="9637455a-d2d9-41ac-be89-aeef7331b819" Nov 1 00:22:15.934759 kubelet[2664]: E1101 00:22:15.933844 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.934942 kubelet[2664]: E1101 00:22:15.933873 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s6zgv" Nov 1 00:22:15.934942 kubelet[2664]: E1101 00:22:15.933892 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s6zgv" Nov 1 00:22:15.934942 kubelet[2664]: E1101 00:22:15.933928 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s6zgv_kube-system(945dd47a-80ea-4932-9742-bcde28f179e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s6zgv_kube-system(945dd47a-80ea-4932-9742-bcde28f179e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s6zgv" podUID="945dd47a-80ea-4932-9742-bcde28f179e6" Nov 1 00:22:15.935134 kubelet[2664]: E1101 00:22:15.934005 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2zlcz" Nov 1 00:22:15.935134 kubelet[2664]: E1101 00:22:15.934035 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2zlcz" Nov 1 00:22:15.935134 kubelet[2664]: E1101 00:22:15.934107 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-2zlcz_calico-system(14a0cb3a-c17b-419c-80e4-76ffe3aff4c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-2zlcz_calico-system(14a0cb3a-c17b-419c-80e4-76ffe3aff4c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2zlcz" podUID="14a0cb3a-c17b-419c-80e4-76ffe3aff4c5" Nov 1 00:22:15.947761 containerd[1578]: time="2025-11-01T00:22:15.946722990Z" level=error msg="Failed to destroy network for sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.947761 containerd[1578]: time="2025-11-01T00:22:15.947139622Z" level=error msg="encountered an error cleaning up failed sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.947761 containerd[1578]: time="2025-11-01T00:22:15.947199464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76d974b5c6-z26qx,Uid:70d7c9dc-5ae1-4150-b4ab-1e59c014a05a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.948115 kubelet[2664]: E1101 00:22:15.947446 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.948115 kubelet[2664]: E1101 00:22:15.947504 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" Nov 1 00:22:15.948115 kubelet[2664]: E1101 00:22:15.947532 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" Nov 1 00:22:15.948310 kubelet[2664]: E1101 00:22:15.947572 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76d974b5c6-z26qx_calico-system(70d7c9dc-5ae1-4150-b4ab-1e59c014a05a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76d974b5c6-z26qx_calico-system(70d7c9dc-5ae1-4150-b4ab-1e59c014a05a)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" podUID="70d7c9dc-5ae1-4150-b4ab-1e59c014a05a" Nov 1 00:22:15.955666 containerd[1578]: time="2025-11-01T00:22:15.955143209Z" level=error msg="Failed to destroy network for sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.956423 containerd[1578]: time="2025-11-01T00:22:15.956177489Z" level=error msg="encountered an error cleaning up failed sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.956423 containerd[1578]: time="2025-11-01T00:22:15.956283876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9d7d8847-x447l,Uid:4937286b-cd30-4d33-95a1-43e7f1688846,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.957934 kubelet[2664]: E1101 00:22:15.957102 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:15.957934 kubelet[2664]: E1101 00:22:15.957192 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f9d7d8847-x447l" Nov 1 00:22:15.957934 kubelet[2664]: E1101 00:22:15.957214 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f9d7d8847-x447l" Nov 1 00:22:15.958097 kubelet[2664]: E1101 00:22:15.957261 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f9d7d8847-x447l_calico-system(4937286b-cd30-4d33-95a1-43e7f1688846)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-6f9d7d8847-x447l_calico-system(4937286b-cd30-4d33-95a1-43e7f1688846)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f9d7d8847-x447l" podUID="4937286b-cd30-4d33-95a1-43e7f1688846" Nov 1 00:22:16.178858 containerd[1578]: time="2025-11-01T00:22:16.178695078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mglhw,Uid:c76f0dc0-2591-4062-8741-1604477875d5,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:16.288843 containerd[1578]: time="2025-11-01T00:22:16.288753360Z" level=error msg="Failed to destroy network for sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.291196 containerd[1578]: time="2025-11-01T00:22:16.291108010Z" level=error msg="encountered an error cleaning up failed sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.293619 containerd[1578]: time="2025-11-01T00:22:16.291222813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mglhw,Uid:c76f0dc0-2591-4062-8741-1604477875d5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.294065 kubelet[2664]: E1101 00:22:16.293871 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.294065 kubelet[2664]: E1101 00:22:16.293996 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mglhw" Nov 1 00:22:16.294065 kubelet[2664]: E1101 00:22:16.294027 2664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mglhw" Nov 1 00:22:16.294903 
kubelet[2664]: E1101 00:22:16.294485 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mglhw_calico-system(c76f0dc0-2591-4062-8741-1604477875d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mglhw_calico-system(c76f0dc0-2591-4062-8741-1604477875d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:16.299664 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e-shm.mount: Deactivated successfully. Nov 1 00:22:16.412449 kubelet[2664]: I1101 00:22:16.411116 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:16.421704 kubelet[2664]: I1101 00:22:16.420940 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:16.422928 containerd[1578]: time="2025-11-01T00:22:16.422168997Z" level=info msg="StopPodSandbox for \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\"" Nov 1 00:22:16.427232 containerd[1578]: time="2025-11-01T00:22:16.424435230Z" level=info msg="Ensure that sandbox 8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe in task-service has been cleanup successfully" Nov 1 00:22:16.428495 kubelet[2664]: I1101 00:22:16.428409 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:16.428729 containerd[1578]: time="2025-11-01T00:22:16.428692109Z" level=info msg="StopPodSandbox for \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\"" Nov 1 00:22:16.429079 containerd[1578]: time="2025-11-01T00:22:16.428979305Z" level=info msg="Ensure that sandbox ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f in task-service has been cleanup successfully" Nov 1 00:22:16.431871 containerd[1578]: time="2025-11-01T00:22:16.431810557Z" level=info msg="StopPodSandbox for \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\"" Nov 1 00:22:16.432117 containerd[1578]: time="2025-11-01T00:22:16.432075792Z" level=info msg="Ensure that sandbox f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e in task-service has been cleanup successfully" Nov 1 00:22:16.434705 kubelet[2664]: I1101 00:22:16.434495 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:16.440921 containerd[1578]: time="2025-11-01T00:22:16.439814179Z" level=info msg="StopPodSandbox for \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\"" Nov 1 00:22:16.442540 kubelet[2664]: I1101 00:22:16.442501 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:16.447267 containerd[1578]: time="2025-11-01T00:22:16.447030275Z" level=info msg="Ensure that sandbox 
926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e in task-service has been cleanup successfully" Nov 1 00:22:16.454231 containerd[1578]: time="2025-11-01T00:22:16.454148456Z" level=info msg="StopPodSandbox for \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\"" Nov 1 00:22:16.454888 containerd[1578]: time="2025-11-01T00:22:16.454671645Z" level=info msg="Ensure that sandbox c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9 in task-service has been cleanup successfully" Nov 1 00:22:16.516179 kubelet[2664]: I1101 00:22:16.516128 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:16.538293 containerd[1578]: time="2025-11-01T00:22:16.538175990Z" level=info msg="StopPodSandbox for \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\"" Nov 1 00:22:16.538978 containerd[1578]: time="2025-11-01T00:22:16.538924892Z" level=info msg="Ensure that sandbox cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c in task-service has been cleanup successfully" Nov 1 00:22:16.557321 kubelet[2664]: I1101 00:22:16.557261 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:16.560163 containerd[1578]: time="2025-11-01T00:22:16.559916324Z" level=info msg="StopPodSandbox for \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\"" Nov 1 00:22:16.568255 containerd[1578]: time="2025-11-01T00:22:16.567487821Z" level=info msg="Ensure that sandbox 32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601 in task-service has been cleanup successfully" Nov 1 00:22:16.580219 kubelet[2664]: I1101 00:22:16.580168 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:16.584833 containerd[1578]: time="2025-11-01T00:22:16.583569370Z" level=info msg="StopPodSandbox for \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\"" Nov 1 00:22:16.591823 containerd[1578]: time="2025-11-01T00:22:16.591743385Z" level=info msg="Ensure that sandbox b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9 in task-service has been cleanup successfully" Nov 1 00:22:16.632217 containerd[1578]: time="2025-11-01T00:22:16.631287019Z" level=error msg="StopPodSandbox for \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\" failed" error="failed to destroy network for sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.632587 kubelet[2664]: E1101 00:22:16.632349 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:16.632942 kubelet[2664]: E1101 00:22:16.632701 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe"} Nov 1 00:22:16.633032 kubelet[2664]: E1101 00:22:16.632979 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"945dd47a-80ea-4932-9742-bcde28f179e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:16.633032 kubelet[2664]: E1101 00:22:16.633015 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"945dd47a-80ea-4932-9742-bcde28f179e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s6zgv" podUID="945dd47a-80ea-4932-9742-bcde28f179e6" Nov 1 00:22:16.688858 containerd[1578]: time="2025-11-01T00:22:16.688642897Z" level=error msg="StopPodSandbox for \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\" failed" error="failed to destroy network for sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.692902 kubelet[2664]: E1101 00:22:16.692618 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:16.692902 kubelet[2664]: E1101 00:22:16.692731 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9"} Nov 1 00:22:16.692902 kubelet[2664]: E1101 00:22:16.692795 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:16.692902 kubelet[2664]: E1101 00:22:16.692838 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" podUID="fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b" Nov 1 00:22:16.715448 containerd[1578]: time="2025-11-01T00:22:16.714918142Z" level=error msg="StopPodSandbox for \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\" failed" error="failed to destroy network for sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.715448 containerd[1578]: time="2025-11-01T00:22:16.714925528Z" level=error msg="StopPodSandbox for \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\" failed" error="failed to destroy network for sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.716258 kubelet[2664]: E1101 00:22:16.715813 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:16.716585 kubelet[2664]: E1101 00:22:16.716169 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e"} Nov 1 00:22:16.716680 kubelet[2664]: E1101 00:22:16.715980 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:16.716680 kubelet[2664]: E1101 00:22:16.716643 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f"} Nov 1 00:22:16.716786 kubelet[2664]: E1101 00:22:16.716702 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4937286b-cd30-4d33-95a1-43e7f1688846\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:16.716786 kubelet[2664]: E1101 00:22:16.716737 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4937286b-cd30-4d33-95a1-43e7f1688846\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f9d7d8847-x447l" podUID="4937286b-cd30-4d33-95a1-43e7f1688846" Nov 1 00:22:16.717617 kubelet[2664]: E1101 00:22:16.717062 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14a0cb3a-c17b-419c-80e4-76ffe3aff4c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:16.717617 kubelet[2664]: E1101 00:22:16.717145 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14a0cb3a-c17b-419c-80e4-76ffe3aff4c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2zlcz" podUID="14a0cb3a-c17b-419c-80e4-76ffe3aff4c5" Nov 1 00:22:16.737456 containerd[1578]: time="2025-11-01T00:22:16.737221354Z" level=error msg="StopPodSandbox for \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\" failed" error="failed to destroy network for sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.740714 kubelet[2664]: E1101 00:22:16.740318 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:16.742207 kubelet[2664]: E1101 00:22:16.741944 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e"} Nov 1 00:22:16.742207 kubelet[2664]: E1101 00:22:16.742072 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c76f0dc0-2591-4062-8741-1604477875d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:16.742207 kubelet[2664]: E1101 00:22:16.742130 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c76f0dc0-2591-4062-8741-1604477875d5\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:16.770060 containerd[1578]: time="2025-11-01T00:22:16.766790179Z" level=error msg="StopPodSandbox for \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\" failed" error="failed to destroy network for sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.770214 kubelet[2664]: E1101 00:22:16.768547 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:16.770214 kubelet[2664]: E1101 00:22:16.768631 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601"} Nov 1 00:22:16.770214 kubelet[2664]: E1101 00:22:16.768672 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"70d7c9dc-5ae1-4150-b4ab-1e59c014a05a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:16.770214 kubelet[2664]: E1101 00:22:16.768704 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"70d7c9dc-5ae1-4150-b4ab-1e59c014a05a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" podUID="70d7c9dc-5ae1-4150-b4ab-1e59c014a05a" Nov 1 00:22:16.773063 containerd[1578]: time="2025-11-01T00:22:16.773010822Z" level=error msg="StopPodSandbox for \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\" failed" error="failed to destroy network for sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.773508 kubelet[2664]: E1101 00:22:16.773467 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:16.773678 kubelet[2664]: E1101 00:22:16.773657 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c"} Nov 1 00:22:16.773813 kubelet[2664]: E1101 00:22:16.773743 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9637455a-d2d9-41ac-be89-aeef7331b819\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:16.773813 kubelet[2664]: E1101 00:22:16.773784 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9637455a-d2d9-41ac-be89-aeef7331b819\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fxl85" podUID="9637455a-d2d9-41ac-be89-aeef7331b819" Nov 1 00:22:16.780558 containerd[1578]: time="2025-11-01T00:22:16.780497809Z" level=error msg="StopPodSandbox for \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\" failed" error="failed to destroy network for sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:16.781076 kubelet[2664]: E1101 00:22:16.781033 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:16.781365 kubelet[2664]: E1101 00:22:16.781338 2664 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9"} Nov 1 00:22:16.781569 kubelet[2664]: E1101 00:22:16.781553 2664 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1172650d-8656-4c06-afa1-e156b3ef1286\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Nov 1 00:22:16.781696 kubelet[2664]: E1101 00:22:16.781672 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1172650d-8656-4c06-afa1-e156b3ef1286\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" podUID="1172650d-8656-4c06-afa1-e156b3ef1286" Nov 1 00:22:22.264217 systemd-journald[1137]: Under memory pressure, flushing caches. Nov 1 00:22:22.261699 systemd-resolved[1480]: Under memory pressure, flushing caches. Nov 1 00:22:22.261779 systemd-resolved[1480]: Flushed all caches. Nov 1 00:22:23.582124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701224776.mount: Deactivated successfully. Nov 1 00:22:23.667400 containerd[1578]: time="2025-11-01T00:22:23.666123862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:22:23.674561 containerd[1578]: time="2025-11-01T00:22:23.674497468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:23.679435 containerd[1578]: time="2025-11-01T00:22:23.679070403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.261409839s" Nov 1 00:22:23.679435 containerd[1578]: time="2025-11-01T00:22:23.679161421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:22:23.698418 containerd[1578]: time="2025-11-01T00:22:23.698098371Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:23.700926 containerd[1578]: time="2025-11-01T00:22:23.700863558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:23.793703 containerd[1578]: time="2025-11-01T00:22:23.793446260Z" level=info msg="CreateContainer within sandbox \"2cdc6fc5598a55d07fad246bb53e1ed028b7d3fc313375e688f643e82da67b62\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:22:23.874512 containerd[1578]: time="2025-11-01T00:22:23.874319406Z" level=info msg="CreateContainer within sandbox \"2cdc6fc5598a55d07fad246bb53e1ed028b7d3fc313375e688f643e82da67b62\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"97ddec0a5c514651ecb57c1e94a521cc731d3382fbfcc06b086d9890310dc818\"" Nov 1 00:22:23.876072 containerd[1578]: time="2025-11-01T00:22:23.876026889Z" level=info msg="StartContainer for \"97ddec0a5c514651ecb57c1e94a521cc731d3382fbfcc06b086d9890310dc818\""
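For scale: the "Pulled image" record above reports 156883537 bytes transferred in 8.261409839s, roughly 19 MB/s from ghcr.io, and the "stop pulling image" record's bytes read=156883675 matches the image size to within a few hundred bytes of manifest overhead, which suggests essentially the whole image came over the network. A quick check of that arithmetic (a standalone sketch; the two constants are copied from the log, nothing here is containerd code):

```go
package main

import "fmt"

func main() {
	const bytesPulled = 156883537   // "size" reported for ghcr.io/flatcar/calico/node:v3.30.4
	const pullSeconds = 8.261409839 // "in 8.261409839s" from the Pulled image record

	rate := bytesPulled / pullSeconds // bytes per second
	fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n",
		rate/1e6, rate/(1024*1024))
	// Prints roughly: effective pull rate: 19.0 MB/s (18.1 MiB/s)
}
```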
level=info msg="StartContainer for \"97ddec0a5c514651ecb57c1e94a521cc731d3382fbfcc06b086d9890310dc818\" returns successfully" Nov 1 00:22:24.240814 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:22:24.241003 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:22:24.311857 systemd-journald[1137]: Under memory pressure, flushing caches. Nov 1 00:22:24.311508 systemd-resolved[1480]: Under memory pressure, flushing caches. Nov 1 00:22:24.311518 systemd-resolved[1480]: Flushed all caches. Nov 1 00:22:24.447928 containerd[1578]: time="2025-11-01T00:22:24.447524282Z" level=info msg="StopPodSandbox for \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\"" Nov 1 00:22:24.688405 kubelet[2664]: E1101 00:22:24.686325 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:24.792817 kubelet[2664]: I1101 00:22:24.779537 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9gxmj" podStartSLOduration=2.299361661 podStartE2EDuration="20.741769094s" podCreationTimestamp="2025-11-01 00:22:04 +0000 UTC" firstStartedPulling="2025-11-01 00:22:05.25704002 +0000 UTC m=+24.304517279" lastFinishedPulling="2025-11-01 00:22:23.699447453 +0000 UTC m=+42.746924712" observedRunningTime="2025-11-01 00:22:24.736714789 +0000 UTC m=+43.784192070" watchObservedRunningTime="2025-11-01 00:22:24.741769094 +0000 UTC m=+43.789246376" Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.613 [INFO][3850] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.617 [INFO][3850] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" iface="eth0" netns="/var/run/netns/cni-bb859a66-ad59-2afd-3753-52fefbf1069a" Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.618 [INFO][3850] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" iface="eth0" netns="/var/run/netns/cni-bb859a66-ad59-2afd-3753-52fefbf1069a" Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.619 [INFO][3850] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" iface="eth0" netns="/var/run/netns/cni-bb859a66-ad59-2afd-3753-52fefbf1069a" Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.619 [INFO][3850] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.619 [INFO][3850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.869 [INFO][3859] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" HandleID="k8s-pod-network.ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.871 [INFO][3859] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.872 [INFO][3859] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.887 [WARNING][3859] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" HandleID="k8s-pod-network.ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.887 [INFO][3859] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" HandleID="k8s-pod-network.ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.889 [INFO][3859] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:24.895206 containerd[1578]: 2025-11-01 00:22:24.892 [INFO][3850] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:24.895206 containerd[1578]: time="2025-11-01T00:22:24.894993717Z" level=info msg="TearDown network for sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\" successfully" Nov 1 00:22:24.895206 containerd[1578]: time="2025-11-01T00:22:24.895027540Z" level=info msg="StopPodSandbox for \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\" returns successfully" Nov 1 00:22:24.903603 systemd[1]: run-netns-cni\x2dbb859a66\x2dad59\x2d2afd\x2d3753\x2d52fefbf1069a.mount: Deactivated successfully. 
Nov 1 00:22:25.031332 kubelet[2664]: I1101 00:22:25.030397 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl7zj\" (UniqueName: \"kubernetes.io/projected/4937286b-cd30-4d33-95a1-43e7f1688846-kube-api-access-zl7zj\") pod \"4937286b-cd30-4d33-95a1-43e7f1688846\" (UID: \"4937286b-cd30-4d33-95a1-43e7f1688846\") " Nov 1 00:22:25.031332 kubelet[2664]: I1101 00:22:25.030477 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4937286b-cd30-4d33-95a1-43e7f1688846-whisker-backend-key-pair\") pod \"4937286b-cd30-4d33-95a1-43e7f1688846\" (UID: \"4937286b-cd30-4d33-95a1-43e7f1688846\") " Nov 1 00:22:25.031332 kubelet[2664]: I1101 00:22:25.030544 2664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4937286b-cd30-4d33-95a1-43e7f1688846-whisker-ca-bundle\") pod \"4937286b-cd30-4d33-95a1-43e7f1688846\" (UID: \"4937286b-cd30-4d33-95a1-43e7f1688846\") " Nov 1 00:22:25.034193 kubelet[2664]: I1101 00:22:25.031130 2664 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4937286b-cd30-4d33-95a1-43e7f1688846-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4937286b-cd30-4d33-95a1-43e7f1688846" (UID: "4937286b-cd30-4d33-95a1-43e7f1688846"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:22:25.048801 systemd[1]: var-lib-kubelet-pods-4937286b\x2dcd30\x2d4d33\x2d95a1\x2d43e7f1688846-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzl7zj.mount: Deactivated successfully. Nov 1 00:22:25.049043 systemd[1]: var-lib-kubelet-pods-4937286b\x2dcd30\x2d4d33\x2d95a1\x2d43e7f1688846-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:22:25.050592 kubelet[2664]: I1101 00:22:25.049887 2664 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4937286b-cd30-4d33-95a1-43e7f1688846-kube-api-access-zl7zj" (OuterVolumeSpecName: "kube-api-access-zl7zj") pod "4937286b-cd30-4d33-95a1-43e7f1688846" (UID: "4937286b-cd30-4d33-95a1-43e7f1688846"). InnerVolumeSpecName "kube-api-access-zl7zj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:22:25.053584 kubelet[2664]: I1101 00:22:25.053467 2664 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4937286b-cd30-4d33-95a1-43e7f1688846-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4937286b-cd30-4d33-95a1-43e7f1688846" (UID: "4937286b-cd30-4d33-95a1-43e7f1688846"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:22:25.131310 kubelet[2664]: I1101 00:22:25.131197 2664 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4937286b-cd30-4d33-95a1-43e7f1688846-whisker-ca-bundle\") on node \"ci-4081.3.6-n-f16f13e513\" DevicePath \"\"" Nov 1 00:22:25.131310 kubelet[2664]: I1101 00:22:25.131254 2664 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zl7zj\" (UniqueName: \"kubernetes.io/projected/4937286b-cd30-4d33-95a1-43e7f1688846-kube-api-access-zl7zj\") on node \"ci-4081.3.6-n-f16f13e513\" DevicePath \"\"" Nov 1 00:22:25.131310 kubelet[2664]: I1101 00:22:25.131271 2664 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4937286b-cd30-4d33-95a1-43e7f1688846-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-f16f13e513\" DevicePath \"\"" Nov 1 00:22:25.684713 kubelet[2664]: I1101 00:22:25.684665 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:22:25.685410 kubelet[2664]: E1101 00:22:25.685077 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:25.936685 kubelet[2664]: I1101 00:22:25.935983 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zf67\" (UniqueName: \"kubernetes.io/projected/6a14ce9d-f1ba-4792-af7e-32782e663117-kube-api-access-5zf67\") pod \"whisker-7c49b9c9dc-rzlzn\" (UID: \"6a14ce9d-f1ba-4792-af7e-32782e663117\") " pod="calico-system/whisker-7c49b9c9dc-rzlzn" Nov 1 00:22:25.936685 kubelet[2664]: I1101 00:22:25.936042 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a14ce9d-f1ba-4792-af7e-32782e663117-whisker-ca-bundle\") pod \"whisker-7c49b9c9dc-rzlzn\" (UID: \"6a14ce9d-f1ba-4792-af7e-32782e663117\") " pod="calico-system/whisker-7c49b9c9dc-rzlzn" Nov 1 00:22:25.936685 kubelet[2664]: I1101 00:22:25.936073 2664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a14ce9d-f1ba-4792-af7e-32782e663117-whisker-backend-key-pair\") pod \"whisker-7c49b9c9dc-rzlzn\" (UID: \"6a14ce9d-f1ba-4792-af7e-32782e663117\") " pod="calico-system/whisker-7c49b9c9dc-rzlzn" Nov 1 00:22:26.105416 containerd[1578]: time="2025-11-01T00:22:26.104444297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c49b9c9dc-rzlzn,Uid:6a14ce9d-f1ba-4792-af7e-32782e663117,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:26.362208 systemd-journald[1137]: Under memory pressure, flushing caches. Nov 1 00:22:26.357818 systemd-resolved[1480]: Under memory pressure, flushing caches. Nov 1 00:22:26.357833 systemd-resolved[1480]: Flushed all caches. 
Nov 1 00:22:26.584231 systemd-networkd[1227]: cali52731946e01: Link UP Nov 1 00:22:26.586083 systemd-networkd[1227]: cali52731946e01: Gained carrier Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.263 [INFO][3970] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.287 [INFO][3970] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0 whisker-7c49b9c9dc- calico-system 6a14ce9d-f1ba-4792-af7e-32782e663117 928 0 2025-11-01 00:22:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c49b9c9dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-f16f13e513 whisker-7c49b9c9dc-rzlzn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali52731946e01 [] [] }} ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Namespace="calico-system" Pod="whisker-7c49b9c9dc-rzlzn" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.289 [INFO][3970] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Namespace="calico-system" Pod="whisker-7c49b9c9dc-rzlzn" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.420 [INFO][3980] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" HandleID="k8s-pod-network.1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.425 [INFO][3980] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" HandleID="k8s-pod-network.1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003323b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f16f13e513", "pod":"whisker-7c49b9c9dc-rzlzn", "timestamp":"2025-11-01 00:22:26.42095326 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f16f13e513", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.427 [INFO][3980] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.427 [INFO][3980] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.427 [INFO][3980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f16f13e513' Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.453 [INFO][3980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.477 [INFO][3980] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.487 [INFO][3980] ipam/ipam.go 511: Trying affinity for 192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.495 [INFO][3980] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.505 [INFO][3980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.505 [INFO][3980] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.192/26 handle="k8s-pod-network.1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.508 [INFO][3980] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00 Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.516 [INFO][3980] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.192/26 handle="k8s-pod-network.1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.536 [INFO][3980] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.17.193/26] block=192.168.17.192/26 handle="k8s-pod-network.1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.537 [INFO][3980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.193/26] handle="k8s-pod-network.1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.537 [INFO][3980] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
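The IPAM exchange just completed (acquire the host-wide lock, confirm this host's affinity to 192.168.17.192/26, claim one address, release the lock) boils down to finding the first unassigned address inside an affine block. A toy version using net/netip, with an in-memory set standing in for Calico's datastore block; the types and names here are illustrative, not Calico's.

```go
package main

import (
	"fmt"
	"net/netip"
)

// block mimics one Calico IPAM block: a /26 plus an allocation set.
type block struct {
	cidr  netip.Prefix
	inUse map[netip.Addr]bool
}

// assign claims the first free address in the block, skipping the network
// address, the way the log's "Auto-assigned 1 out of 1 IPv4s" step does.
func (b *block) assign() (netip.Addr, bool) {
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if !b.inUse[a] {
			b.inUse[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	b := &block{
		cidr:  netip.MustParsePrefix("192.168.17.192/26"),
		inUse: map[netip.Addr]bool{},
	}
	// The first claim lands on .193, matching the address handed to the
	// whisker pod above; later pods in this log get .194 and .195.
	for i := 0; i < 3; i++ {
		a, ok := b.assign()
		fmt.Println(a, ok)
	}
}
```

The host-wide lock in the real flow serializes exactly this read-modify-write of the block, which is why every assignment in the log is bracketed by the acquire/release pair.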
Nov 1 00:22:26.649479 containerd[1578]: 2025-11-01 00:22:26.537 [INFO][3980] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.193/26] IPv6=[] ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" HandleID="k8s-pod-network.1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0" Nov 1 00:22:26.657387 containerd[1578]: 2025-11-01 00:22:26.550 [INFO][3970] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Namespace="calico-system" Pod="whisker-7c49b9c9dc-rzlzn" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0", GenerateName:"whisker-7c49b9c9dc-", Namespace:"calico-system", SelfLink:"", UID:"6a14ce9d-f1ba-4792-af7e-32782e663117", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c49b9c9dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"", Pod:"whisker-7c49b9c9dc-rzlzn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.17.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali52731946e01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:26.657387 containerd[1578]: 2025-11-01 00:22:26.551 [INFO][3970] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.193/32] ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Namespace="calico-system" Pod="whisker-7c49b9c9dc-rzlzn" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0" Nov 1 00:22:26.657387 containerd[1578]: 2025-11-01 00:22:26.551 [INFO][3970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52731946e01 ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Namespace="calico-system" Pod="whisker-7c49b9c9dc-rzlzn" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0" Nov 1 00:22:26.657387 containerd[1578]: 2025-11-01 00:22:26.581 [INFO][3970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Namespace="calico-system" Pod="whisker-7c49b9c9dc-rzlzn" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0" Nov 1 00:22:26.657387 containerd[1578]: 2025-11-01 00:22:26.593 [INFO][3970] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Namespace="calico-system" 
Pod="whisker-7c49b9c9dc-rzlzn" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0", GenerateName:"whisker-7c49b9c9dc-", Namespace:"calico-system", SelfLink:"", UID:"6a14ce9d-f1ba-4792-af7e-32782e663117", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c49b9c9dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00", Pod:"whisker-7c49b9c9dc-rzlzn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.17.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali52731946e01", MAC:"e2:9f:d6:e9:11:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:26.657387 containerd[1578]: 2025-11-01 00:22:26.632 [INFO][3970] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00" Namespace="calico-system" Pod="whisker-7c49b9c9dc-rzlzn" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-whisker--7c49b9c9dc--rzlzn-eth0" Nov 1 00:22:26.731448 kernel: bpftool[4032]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:22:26.754676 containerd[1578]: time="2025-11-01T00:22:26.754471704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:26.754676 containerd[1578]: time="2025-11-01T00:22:26.754584712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:26.754676 containerd[1578]: time="2025-11-01T00:22:26.754597135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:26.758539 containerd[1578]: time="2025-11-01T00:22:26.758410780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:26.896958 containerd[1578]: time="2025-11-01T00:22:26.896850391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c49b9c9dc-rzlzn,Uid:6a14ce9d-f1ba-4792-af7e-32782e663117,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b833b7c2fa62c05e9b7c5ad2a588cf883da3ca1896f1c31ac6bb913d41c5a00\"" Nov 1 00:22:26.908594 containerd[1578]: time="2025-11-01T00:22:26.907577210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:22:27.118199 systemd-networkd[1227]: vxlan.calico: Link UP Nov 1 00:22:27.118208 systemd-networkd[1227]: vxlan.calico: Gained carrier Nov 1 00:22:27.198601 kubelet[2664]: I1101 00:22:27.198227 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4937286b-cd30-4d33-95a1-43e7f1688846" path="/var/lib/kubelet/pods/4937286b-cd30-4d33-95a1-43e7f1688846/volumes" Nov 1 00:22:27.251521 containerd[1578]: time="2025-11-01T00:22:27.251186598Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:27.276444 containerd[1578]: time="2025-11-01T00:22:27.257101212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:22:27.276444 containerd[1578]: time="2025-11-01T00:22:27.257443103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:22:27.276718 kubelet[2664]: E1101 00:22:27.276207 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:27.277256 kubelet[2664]: E1101 00:22:27.277189 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:27.313614 kubelet[2664]: E1101 00:22:27.313489 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:00518e1e75554aa7b4b7d6589bc5691e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zf67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c49b9c9dc-rzlzn_calico-system(6a14ce9d-f1ba-4792-af7e-32782e663117): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:27.317081 containerd[1578]: time="2025-11-01T00:22:27.316679148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:22:27.637925 containerd[1578]: time="2025-11-01T00:22:27.637520152Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:27.642723 containerd[1578]: time="2025-11-01T00:22:27.642652027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:22:27.643060 containerd[1578]: time="2025-11-01T00:22:27.642695959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:22:27.643260 kubelet[2664]: E1101 00:22:27.643218 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:27.643330 kubelet[2664]: E1101 00:22:27.643276 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:27.645420 kubelet[2664]: E1101 00:22:27.645264 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zf67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c49b9c9dc-rzlzn_calico-system(6a14ce9d-f1ba-4792-af7e-32782e663117): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:27.653822 kubelet[2664]: E1101 00:22:27.653139 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c49b9c9dc-rzlzn" podUID="6a14ce9d-f1ba-4792-af7e-32782e663117" Nov 1 00:22:27.699237 kubelet[2664]: E1101 00:22:27.699036 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c49b9c9dc-rzlzn" podUID="6a14ce9d-f1ba-4792-af7e-32782e663117" Nov 1 00:22:28.149912 systemd-networkd[1227]: cali52731946e01: Gained IPv6LL Nov 1 00:22:28.178005 containerd[1578]: time="2025-11-01T00:22:28.177945582Z" level=info msg="StopPodSandbox for \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\"" Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.267 [INFO][4158] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.268 [INFO][4158] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" iface="eth0" netns="/var/run/netns/cni-6f2527f6-8b7e-7345-26db-39116789c4ee" Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.270 [INFO][4158] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" iface="eth0" netns="/var/run/netns/cni-6f2527f6-8b7e-7345-26db-39116789c4ee" Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.270 [INFO][4158] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" iface="eth0" netns="/var/run/netns/cni-6f2527f6-8b7e-7345-26db-39116789c4ee" Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.270 [INFO][4158] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.270 [INFO][4158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.303 [INFO][4166] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" HandleID="k8s-pod-network.b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.303 [INFO][4166] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.303 [INFO][4166] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.313 [WARNING][4166] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" HandleID="k8s-pod-network.b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.313 [INFO][4166] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" HandleID="k8s-pod-network.b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.316 [INFO][4166] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:28.322913 containerd[1578]: 2025-11-01 00:22:28.319 [INFO][4158] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:28.327155 containerd[1578]: time="2025-11-01T00:22:28.323232899Z" level=info msg="TearDown network for sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\" successfully" Nov 1 00:22:28.327155 containerd[1578]: time="2025-11-01T00:22:28.323277010Z" level=info msg="StopPodSandbox for \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\" returns successfully" Nov 1 00:22:28.327155 containerd[1578]: time="2025-11-01T00:22:28.324696384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d65b76bbf-shhvk,Uid:1172650d-8656-4c06-afa1-e156b3ef1286,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:22:28.330183 systemd[1]: run-netns-cni\x2d6f2527f6\x2d8b7e\x2d7345\x2d26db\x2d39116789c4ee.mount: Deactivated successfully. 
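The `run-netns-cni\x2d…` mount units systemd reports deactivating here use systemd's unit-name escaping: path separators become "-", and literal "-" and other unsafe bytes are encoded as `\xXX`. A small decoder for that convention, covering just the cases visible in this log rather than the full systemd-escape specification:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses the systemd unit-name escaping seen in the log:
// "\xNN" decodes to the raw byte and remaining "-" turn back into "/".
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var out strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			b, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
			if err == nil {
				out.WriteByte(byte(b))
				i += 3
				continue
			}
			out.WriteByte(name[i])
		case name[i] == '-':
			out.WriteByte('/')
		default:
			out.WriteByte(name[i])
		}
	}
	return "/" + out.String()
}

func main() {
	// The netns mount unit above decodes back to its filesystem path:
	// /run/netns/cni-6f2527f6-8b7e-7345-26db-39116789c4ee
	fmt.Println(unescapeUnit(`run-netns-cni\x2d6f2527f6\x2d8b7e\x2d7345\x2d26db\x2d39116789c4ee.mount`))
}
```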
Nov 1 00:22:28.538346 systemd-networkd[1227]: cali18f1a722935: Link UP Nov 1 00:22:28.544013 systemd-networkd[1227]: cali18f1a722935: Gained carrier Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.418 [INFO][4173] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0 calico-apiserver-7d65b76bbf- calico-apiserver 1172650d-8656-4c06-afa1-e156b3ef1286 949 0 2025-11-01 00:21:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d65b76bbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-f16f13e513 calico-apiserver-7d65b76bbf-shhvk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali18f1a722935 [] [] }} ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-shhvk" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.419 [INFO][4173] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-shhvk" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.461 [INFO][4185] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" HandleID="k8s-pod-network.7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.461 [INFO][4185] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" HandleID="k8s-pod-network.7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-f16f13e513", "pod":"calico-apiserver-7d65b76bbf-shhvk", "timestamp":"2025-11-01 00:22:28.461410716 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f16f13e513", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.461 [INFO][4185] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.461 [INFO][4185] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.461 [INFO][4185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f16f13e513' Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.474 [INFO][4185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.486 [INFO][4185] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.494 [INFO][4185] ipam/ipam.go 511: Trying affinity for 192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.497 [INFO][4185] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.502 [INFO][4185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.503 [INFO][4185] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.192/26 handle="k8s-pod-network.7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.506 [INFO][4185] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943 Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.514 [INFO][4185] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.192/26 handle="k8s-pod-network.7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.524 [INFO][4185] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.17.194/26] block=192.168.17.192/26 handle="k8s-pod-network.7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.525 [INFO][4185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.194/26] handle="k8s-pod-network.7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.526 [INFO][4185] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
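Each endpoint in this log gets a host-side veth such as cali18f1a722935: Calico derives the suffix from the workload endpoint's identity so the name is stable across retries and fits the kernel's 15-character interface-name limit (IFNAMSIZ-1). One plausible derivation is sketched below; it is hash-based like Calico's scheme but not byte-for-byte its algorithm, and the workload key format is an assumption.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// ifaceName derives a stable host-side interface name from a workload key.
// Linux caps interface names at 15 bytes (IFNAMSIZ-1), so a short prefix
// plus 11 hex chars of a hash fits exactly. The hash choice here shows the
// shape of the scheme, not Calico's exact bytes.
func ifaceName(prefix, workloadKey string) string {
	sum := sha1.Sum([]byte(workloadKey))
	return prefix + hex.EncodeToString(sum[:])[:15-len(prefix)]
}

func main() {
	// The same pod key always yields the same name, so a retry after a
	// sandbox teardown re-creates the identical device.
	name := ifaceName("cali", "calico-apiserver/calico-apiserver-7d65b76bbf-shhvk")
	fmt.Println(name, len(name))
}
```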
Nov 1 00:22:28.582751 containerd[1578]: 2025-11-01 00:22:28.526 [INFO][4185] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.194/26] IPv6=[] ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" HandleID="k8s-pod-network.7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:28.585552 containerd[1578]: 2025-11-01 00:22:28.531 [INFO][4173] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-shhvk" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0", GenerateName:"calico-apiserver-7d65b76bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"1172650d-8656-4c06-afa1-e156b3ef1286", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d65b76bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"", Pod:"calico-apiserver-7d65b76bbf-shhvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18f1a722935", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:28.585552 containerd[1578]: 2025-11-01 00:22:28.531 [INFO][4173] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.194/32] ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-shhvk" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:28.585552 containerd[1578]: 2025-11-01 00:22:28.532 [INFO][4173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18f1a722935 ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-shhvk" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:28.585552 containerd[1578]: 2025-11-01 00:22:28.552 [INFO][4173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-shhvk" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:28.585552 containerd[1578]: 2025-11-01 00:22:28.553 [INFO][4173] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-shhvk" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0", GenerateName:"calico-apiserver-7d65b76bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"1172650d-8656-4c06-afa1-e156b3ef1286", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d65b76bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943", Pod:"calico-apiserver-7d65b76bbf-shhvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18f1a722935", MAC:"da:09:53:91:65:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:28.585552 containerd[1578]: 2025-11-01 00:22:28.576 [INFO][4173] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-shhvk" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:28.636083 containerd[1578]: time="2025-11-01T00:22:28.635878223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:28.637409 containerd[1578]: time="2025-11-01T00:22:28.636359731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:28.637409 containerd[1578]: time="2025-11-01T00:22:28.636456480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:28.638106 containerd[1578]: time="2025-11-01T00:22:28.637800701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:28.710757 systemd[1]: run-containerd-runc-k8s.io-7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943-runc.uVUoYd.mount: Deactivated successfully. 
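The ErrImagePull entries that follow decay into ImagePullBackOff: rather than hammering the registry for a tag that does not exist (ghcr.io serves 404 for these whisker and apiserver tags), kubelet retries failed pulls on an exponential schedule, by default starting around 10 seconds and doubling up to a 5-minute cap. A sketch of that schedule; the 10s/2x/5m numbers are kubelet's documented defaults but should be treated as assumptions here.

```go
package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns successive retry delays with a doubling factor
// and a hard cap, the shape behind the ImagePullBackOff entries below.
func backoffSchedule(initial, max time.Duration, attempts int) []time.Duration {
	delays := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		if d *= 2; d > max {
			d = max
		}
	}
	return delays
}

func main() {
	// 10s initial delay, 5m cap: 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
	for _, d := range backoffSchedule(10*time.Second, 5*time.Minute, 7) {
		fmt.Println(d)
	}
}
```

This is why the log alternates between ErrImagePull (an attempt just failed) and ImagePullBackOff (kubelet is waiting out the current delay) for the same pod.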
Nov 1 00:22:28.717631 kubelet[2664]: E1101 00:22:28.717514 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c49b9c9dc-rzlzn" podUID="6a14ce9d-f1ba-4792-af7e-32782e663117" Nov 1 00:22:28.942078 kubelet[2664]: I1101 00:22:28.941994 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:22:28.942675 kubelet[2664]: E1101 00:22:28.942651 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:28.950865 containerd[1578]: time="2025-11-01T00:22:28.950546278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d65b76bbf-shhvk,Uid:1172650d-8656-4c06-afa1-e156b3ef1286,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943\"" Nov 1 00:22:28.954049 containerd[1578]: time="2025-11-01T00:22:28.953475038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:29.173631 systemd-networkd[1227]: vxlan.calico: Gained IPv6LL Nov 1 00:22:29.176360 containerd[1578]: time="2025-11-01T00:22:29.175896375Z" level=info msg="StopPodSandbox for \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\"" Nov 1 00:22:29.293180 containerd[1578]: time="2025-11-01T00:22:29.293006351Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:29.296956 containerd[1578]: time="2025-11-01T00:22:29.296781537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:29.297708 containerd[1578]: time="2025-11-01T00:22:29.296597801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:29.298779 kubelet[2664]: E1101 00:22:29.298720 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:29.298952 kubelet[2664]: E1101 00:22:29.298790 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:29.307470 kubelet[2664]: E1101 00:22:29.304955 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spw79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d65b76bbf-shhvk_calico-apiserver(1172650d-8656-4c06-afa1-e156b3ef1286): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:29.307470 kubelet[2664]: E1101 00:22:29.307113 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" podUID="1172650d-8656-4c06-afa1-e156b3ef1286" Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.302 [INFO][4272] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.302 
[INFO][4272] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" iface="eth0" netns="/var/run/netns/cni-cd4bda56-5945-4144-e953-b58b8a548ed3" Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.303 [INFO][4272] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" iface="eth0" netns="/var/run/netns/cni-cd4bda56-5945-4144-e953-b58b8a548ed3" Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.305 [INFO][4272] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" iface="eth0" netns="/var/run/netns/cni-cd4bda56-5945-4144-e953-b58b8a548ed3" Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.305 [INFO][4272] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.306 [INFO][4272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.384 [INFO][4280] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" HandleID="k8s-pod-network.926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.384 [INFO][4280] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.384 [INFO][4280] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.395 [WARNING][4280] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" HandleID="k8s-pod-network.926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.395 [INFO][4280] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" HandleID="k8s-pod-network.926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.398 [INFO][4280] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:29.421686 containerd[1578]: 2025-11-01 00:22:29.409 [INFO][4272] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:29.456733 containerd[1578]: time="2025-11-01T00:22:29.454254692Z" level=info msg="TearDown network for sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\" successfully" Nov 1 00:22:29.456733 containerd[1578]: time="2025-11-01T00:22:29.454311041Z" level=info msg="StopPodSandbox for \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\" returns successfully" Nov 1 00:22:29.456983 containerd[1578]: time="2025-11-01T00:22:29.456756495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2zlcz,Uid:14a0cb3a-c17b-419c-80e4-76ffe3aff4c5,Namespace:calico-system,Attempt:1,}" Nov 1 00:22:29.476677 systemd[1]: run-netns-cni\x2dcd4bda56\x2d5945\x2d4144\x2de953\x2db58b8a548ed3.mount: Deactivated successfully. Nov 1 00:22:29.722868 kubelet[2664]: E1101 00:22:29.722805 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:29.731268 kubelet[2664]: E1101 00:22:29.730909 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" podUID="1172650d-8656-4c06-afa1-e156b3ef1286" Nov 1 00:22:29.803782 systemd-networkd[1227]: cali42609e27a27: Link UP Nov 1 00:22:29.804163 systemd-networkd[1227]: cali42609e27a27: Gained carrier Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.614 [INFO][4292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0 goldmane-666569f655- calico-system 14a0cb3a-c17b-419c-80e4-76ffe3aff4c5 964 0 2025-11-01 00:22:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-f16f13e513 goldmane-666569f655-2zlcz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali42609e27a27 [] [] }} ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" Namespace="calico-system" Pod="goldmane-666569f655-2zlcz" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.614 [INFO][4292] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" Namespace="calico-system" Pod="goldmane-666569f655-2zlcz" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.683 [INFO][4318] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" HandleID="k8s-pod-network.cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" 
Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.684 [INFO][4318] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" HandleID="k8s-pod-network.cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f16f13e513", "pod":"goldmane-666569f655-2zlcz", "timestamp":"2025-11-01 00:22:29.683600812 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f16f13e513", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.684 [INFO][4318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.684 [INFO][4318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.684 [INFO][4318] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f16f13e513' Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.699 [INFO][4318] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.714 [INFO][4318] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.728 [INFO][4318] ipam/ipam.go 511: Trying affinity for 192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.734 [INFO][4318] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.743 [INFO][4318] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.743 [INFO][4318] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.192/26 handle="k8s-pod-network.cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.752 [INFO][4318] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.773 [INFO][4318] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.192/26 handle="k8s-pod-network.cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.784 [INFO][4318] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.17.195/26] block=192.168.17.192/26 handle="k8s-pod-network.cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.784 [INFO][4318] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.195/26] 
handle="k8s-pod-network.cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.784 [INFO][4318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:29.841747 containerd[1578]: 2025-11-01 00:22:29.785 [INFO][4318] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.195/26] IPv6=[] ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" HandleID="k8s-pod-network.cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:29.846616 containerd[1578]: 2025-11-01 00:22:29.790 [INFO][4292] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" Namespace="calico-system" Pod="goldmane-666569f655-2zlcz" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"14a0cb3a-c17b-419c-80e4-76ffe3aff4c5", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"", Pod:"goldmane-666569f655-2zlcz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali42609e27a27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:29.846616 containerd[1578]: 2025-11-01 00:22:29.791 [INFO][4292] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.195/32] ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" Namespace="calico-system" Pod="goldmane-666569f655-2zlcz" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:29.846616 containerd[1578]: 2025-11-01 00:22:29.791 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42609e27a27 ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" Namespace="calico-system" Pod="goldmane-666569f655-2zlcz" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:29.846616 containerd[1578]: 2025-11-01 00:22:29.801 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" Namespace="calico-system" Pod="goldmane-666569f655-2zlcz" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 
00:22:29.846616 containerd[1578]: 2025-11-01 00:22:29.810 [INFO][4292] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" Namespace="calico-system" Pod="goldmane-666569f655-2zlcz" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"14a0cb3a-c17b-419c-80e4-76ffe3aff4c5", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f", Pod:"goldmane-666569f655-2zlcz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali42609e27a27", MAC:"96:9b:c0:57:da:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:29.846616 containerd[1578]: 2025-11-01 00:22:29.830 [INFO][4292] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f" Namespace="calico-system" Pod="goldmane-666569f655-2zlcz" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:29.918730 containerd[1578]: time="2025-11-01T00:22:29.917901040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:29.918730 containerd[1578]: time="2025-11-01T00:22:29.918005764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:29.918730 containerd[1578]: time="2025-11-01T00:22:29.918024543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:29.921406 containerd[1578]: time="2025-11-01T00:22:29.920501469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.071468 containerd[1578]: time="2025-11-01T00:22:30.070649977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2zlcz,Uid:14a0cb3a-c17b-419c-80e4-76ffe3aff4c5,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f\"" Nov 1 00:22:30.080199 containerd[1578]: time="2025-11-01T00:22:30.080137954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:22:30.182937 containerd[1578]: time="2025-11-01T00:22:30.182120742Z" level=info msg="StopPodSandbox for \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\"" Nov 1 00:22:30.183362 containerd[1578]: time="2025-11-01T00:22:30.183318244Z" level=info msg="StopPodSandbox for \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\"" Nov 1 00:22:30.183973 containerd[1578]: time="2025-11-01T00:22:30.183935091Z" level=info msg="StopPodSandbox for \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\"" Nov 1 00:22:30.202104 systemd-networkd[1227]: cali18f1a722935: Gained IPv6LL Nov 1 00:22:30.411980 containerd[1578]: time="2025-11-01T00:22:30.411515661Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:30.414700 containerd[1578]: time="2025-11-01T00:22:30.414210899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:22:30.414700 containerd[1578]: time="2025-11-01T00:22:30.414344507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:30.415458 kubelet[2664]: E1101 00:22:30.414958 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:30.415458 kubelet[2664]: E1101 00:22:30.415031 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:30.415458 kubelet[2664]: E1101 00:22:30.415233 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2cp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2zlcz_calico-system(14a0cb3a-c17b-419c-80e4-76ffe3aff4c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:30.415458 kubelet[2664]: E1101 00:22:30.417122 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2zlcz" podUID="14a0cb3a-c17b-419c-80e4-76ffe3aff4c5" Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 
00:22:30.407 [INFO][4410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.407 [INFO][4410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" iface="eth0" netns="/var/run/netns/cni-4f83248d-df5d-ce63-4d7a-de14782917b2" Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.417 [INFO][4410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" iface="eth0" netns="/var/run/netns/cni-4f83248d-df5d-ce63-4d7a-de14782917b2" Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.417 [INFO][4410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" iface="eth0" netns="/var/run/netns/cni-4f83248d-df5d-ce63-4d7a-de14782917b2" Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.418 [INFO][4410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.418 [INFO][4410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.543 [INFO][4429] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" HandleID="k8s-pod-network.cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.543 [INFO][4429] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.543 [INFO][4429] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.584 [WARNING][4429] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" HandleID="k8s-pod-network.cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.584 [INFO][4429] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" HandleID="k8s-pod-network.cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.592 [INFO][4429] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:30.613219 containerd[1578]: 2025-11-01 00:22:30.604 [INFO][4410] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:30.617679 containerd[1578]: time="2025-11-01T00:22:30.614720572Z" level=info msg="TearDown network for sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\" successfully" Nov 1 00:22:30.617679 containerd[1578]: time="2025-11-01T00:22:30.614771003Z" level=info msg="StopPodSandbox for \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\" returns successfully" Nov 1 00:22:30.617777 kubelet[2664]: E1101 00:22:30.615248 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:30.622676 containerd[1578]: time="2025-11-01T00:22:30.621727820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fxl85,Uid:9637455a-d2d9-41ac-be89-aeef7331b819,Namespace:kube-system,Attempt:1,}" Nov 1 00:22:30.627293 systemd[1]: run-netns-cni\x2d4f83248d\x2ddf5d\x2dce63\x2d4d7a\x2dde14782917b2.mount: Deactivated successfully. Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.419 [INFO][4403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.430 [INFO][4403] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" iface="eth0" netns="/var/run/netns/cni-dda392b6-e97e-3150-cab5-2b5d5bc14c6e" Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.437 [INFO][4403] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" iface="eth0" netns="/var/run/netns/cni-dda392b6-e97e-3150-cab5-2b5d5bc14c6e" Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.439 [INFO][4403] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" iface="eth0" netns="/var/run/netns/cni-dda392b6-e97e-3150-cab5-2b5d5bc14c6e" Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.441 [INFO][4403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.442 [INFO][4403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.558 [INFO][4438] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" HandleID="k8s-pod-network.8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.562 [INFO][4438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.592 [INFO][4438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.629 [WARNING][4438] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" HandleID="k8s-pod-network.8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.629 [INFO][4438] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" HandleID="k8s-pod-network.8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.638 [INFO][4438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:30.670481 containerd[1578]: 2025-11-01 00:22:30.653 [INFO][4403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:30.670481 containerd[1578]: time="2025-11-01T00:22:30.670452807Z" level=info msg="TearDown network for sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\" successfully" Nov 1 00:22:30.675225 containerd[1578]: time="2025-11-01T00:22:30.670508448Z" level=info msg="StopPodSandbox for \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\" returns successfully" Nov 1 00:22:30.675264 kubelet[2664]: E1101 00:22:30.670986 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:30.686061 systemd[1]: run-netns-cni\x2ddda392b6\x2de97e\x2d3150\x2dcab5\x2d2b5d5bc14c6e.mount: Deactivated successfully. Nov 1 00:22:30.692408 containerd[1578]: time="2025-11-01T00:22:30.689738349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6zgv,Uid:945dd47a-80ea-4932-9742-bcde28f179e6,Namespace:kube-system,Attempt:1,}" Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.387 [INFO][4411] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.388 [INFO][4411] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" iface="eth0" netns="/var/run/netns/cni-527de1b0-edac-1daa-0cb7-85dd16d32e5c" Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.392 [INFO][4411] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" iface="eth0" netns="/var/run/netns/cni-527de1b0-edac-1daa-0cb7-85dd16d32e5c" Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.394 [INFO][4411] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" iface="eth0" netns="/var/run/netns/cni-527de1b0-edac-1daa-0cb7-85dd16d32e5c" Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.395 [INFO][4411] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.395 [INFO][4411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.566 [INFO][4426] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" HandleID="k8s-pod-network.32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.567 [INFO][4426] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.644 [INFO][4426] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.676 [WARNING][4426] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" HandleID="k8s-pod-network.32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.677 [INFO][4426] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" HandleID="k8s-pod-network.32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.702 [INFO][4426] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:30.795756 containerd[1578]: 2025-11-01 00:22:30.741 [INFO][4411] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:30.796785 containerd[1578]: time="2025-11-01T00:22:30.796534982Z" level=info msg="TearDown network for sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\" successfully" Nov 1 00:22:30.796785 containerd[1578]: time="2025-11-01T00:22:30.796584668Z" level=info msg="StopPodSandbox for \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\" returns successfully" Nov 1 00:22:30.816166 containerd[1578]: time="2025-11-01T00:22:30.815880663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76d974b5c6-z26qx,Uid:70d7c9dc-5ae1-4150-b4ab-1e59c014a05a,Namespace:calico-system,Attempt:1,}" Nov 1 00:22:30.829210 kubelet[2664]: E1101 00:22:30.828764 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" podUID="1172650d-8656-4c06-afa1-e156b3ef1286" Nov 1 00:22:30.838111 kubelet[2664]: E1101 00:22:30.837704 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2zlcz" podUID="14a0cb3a-c17b-419c-80e4-76ffe3aff4c5" Nov 1 00:22:31.205930 containerd[1578]: time="2025-11-01T00:22:31.205648938Z" level=info msg="StopPodSandbox for \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\"" Nov 1 00:22:31.213732 containerd[1578]: time="2025-11-01T00:22:31.213240508Z" level=info msg="StopPodSandbox for \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\"" Nov 1 00:22:31.356748 systemd[1]: run-netns-cni\x2d527de1b0\x2dedac\x2d1daa\x2d0cb7\x2d85dd16d32e5c.mount: Deactivated successfully. 
Nov 1 00:22:31.412929 systemd-networkd[1227]: cali44587a2a6ab: Link UP Nov 1 00:22:31.427519 systemd-networkd[1227]: cali44587a2a6ab: Gained carrier Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:30.951 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0 coredns-668d6bf9bc- kube-system 945dd47a-80ea-4932-9742-bcde28f179e6 989 0 2025-11-01 00:21:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-f16f13e513 coredns-668d6bf9bc-s6zgv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali44587a2a6ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6zgv" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:30.951 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6zgv" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.137 [INFO][4489] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" HandleID="k8s-pod-network.b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.140 [INFO][4489] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" HandleID="k8s-pod-network.b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003761d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-f16f13e513", "pod":"coredns-668d6bf9bc-s6zgv", "timestamp":"2025-11-01 00:22:31.137464186 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f16f13e513", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.141 [INFO][4489] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.141 [INFO][4489] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
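Note how the IPAM plugin brackets its work with "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock" (invocation [4489] here; the same pattern repeats for every pod): concurrent CNI ADDs on a node are serialized so two pods can never be handed the same address. A minimal sketch of that discipline, with hypothetical type and field names rather than Calico's actual implementation:

    // Sketch of the serialization the IPAM log lines imply: one host-wide
    // lock brackets every assignment. Illustrative names only.
    package main

    import (
        "fmt"
        "sync"
    )

    type hostIPAM struct {
        mu   sync.Mutex // the "host-wide IPAM lock" in the log
        next int        // next free ordinal in the host's affine block
    }

    func (h *hostIPAM) assign(pod string) int {
        h.mu.Lock()         // "About to acquire host-wide IPAM lock."
        defer h.mu.Unlock() // "Released host-wide IPAM lock."
        ord := h.next
        h.next++
        fmt.Printf("assigned ordinal %d to %s\n", ord, pod)
        return ord
    }

    func main() {
        h := &hostIPAM{next: 4} // .195 (ordinal 3) was already claimed above
        var wg sync.WaitGroup
        for _, pod := range []string{"coredns-668d6bf9bc-s6zgv", "coredns-668d6bf9bc-fxl85"} {
            wg.Add(1)
            go func(p string) { defer wg.Done(); h.assign(p) }(pod)
        }
        wg.Wait()
    }

This serialization is why interleaved plugin invocations in the log still come away with distinct addresses.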
Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.141 [INFO][4489] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f16f13e513' Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.185 [INFO][4489] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.203 [INFO][4489] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.247 [INFO][4489] ipam/ipam.go 511: Trying affinity for 192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.259 [INFO][4489] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.304 [INFO][4489] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.304 [INFO][4489] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.192/26 handle="k8s-pod-network.b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.310 [INFO][4489] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.328 [INFO][4489] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.192/26 handle="k8s-pod-network.b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.347 [INFO][4489] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.17.196/26] block=192.168.17.192/26 handle="k8s-pod-network.b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.348 [INFO][4489] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.196/26] handle="k8s-pod-network.b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.348 [INFO][4489] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
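The sequence logged above — look up existing affinities, try affinity for 192.168.17.192/26, load the block, assign one address, then write the block back to claim the IP — is block-affinity IPAM: each host prefers its own affine /26 and claims the lowest free ordinal in it. A sketch of that claim step, assuming a simple 64-bit bitmap for the 64 addresses of a /26 (illustrative only, not Calico's data model):

    // Sketch of block-affinity assignment: load the host's affine /26,
    // claim the first free ordinal by flipping a bit, write the block back.
    package main

    import (
        "fmt"
        "math/bits"
        "net"
    )

    type block struct {
        cidr net.IPNet // e.g. 192.168.17.192/26
        used uint64    // one bit per address in the /26
    }

    // claimNext finds the lowest free ordinal and marks it used.
    func (b *block) claimNext() (net.IP, bool) {
        free := ^b.used
        if free == 0 {
            return nil, false // block exhausted; a real allocator falls back to a new block
        }
        ord := bits.TrailingZeros64(free)
        b.used |= 1 << ord
        ip := make(net.IP, 4)
        copy(ip, b.cidr.IP.To4())
        ip[3] += byte(ord) // safe inside a /26: offsets 0..63 never carry
        return ip, true
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.17.192/26")
        b := &block{cidr: *cidr, used: 0b111} // pretend ordinals 0-2 are taken
        for i := 0; i < 3; i++ {
            ip, _ := b.claimNext()
            fmt.Println(ip) // .195, .196, .197 — matching the claims in the log
        }
    }

Writing the block back is what makes a claim visible to other hosts — hence the "Writing block in order to claim IPs" entries above.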
Nov 1 00:22:31.491992 containerd[1578]: 2025-11-01 00:22:31.348 [INFO][4489] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.196/26] IPv6=[] ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" HandleID="k8s-pod-network.b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:31.492718 containerd[1578]: 2025-11-01 00:22:31.361 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6zgv" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"945dd47a-80ea-4932-9742-bcde28f179e6", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"", Pod:"coredns-668d6bf9bc-s6zgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44587a2a6ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:31.492718 containerd[1578]: 2025-11-01 00:22:31.380 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.196/32] ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6zgv" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:31.492718 containerd[1578]: 2025-11-01 00:22:31.380 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44587a2a6ab ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6zgv" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:31.492718 containerd[1578]: 2025-11-01 00:22:31.433 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-s6zgv" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:31.492718 containerd[1578]: 2025-11-01 00:22:31.435 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6zgv" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"945dd47a-80ea-4932-9742-bcde28f179e6", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc", Pod:"coredns-668d6bf9bc-s6zgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44587a2a6ab", MAC:"b2:86:41:4c:53:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:31.492718 containerd[1578]: 2025-11-01 00:22:31.472 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6zgv" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:31.618504 systemd-networkd[1227]: cali26b32575278: Link UP Nov 1 00:22:31.620016 systemd-networkd[1227]: cali26b32575278: Gained carrier Nov 1 00:22:31.659116 containerd[1578]: time="2025-11-01T00:22:31.658140219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:31.659116 containerd[1578]: time="2025-11-01T00:22:31.658207799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:31.659116 containerd[1578]: time="2025-11-01T00:22:31.658219195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:31.659116 containerd[1578]: time="2025-11-01T00:22:31.658340196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:30.959 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0 coredns-668d6bf9bc- kube-system 9637455a-d2d9-41ac-be89-aeef7331b819 988 0 2025-11-01 00:21:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-f16f13e513 coredns-668d6bf9bc-fxl85 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26b32575278 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Namespace="kube-system" Pod="coredns-668d6bf9bc-fxl85" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:30.960 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Namespace="kube-system" Pod="coredns-668d6bf9bc-fxl85" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.260 [INFO][4488] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" HandleID="k8s-pod-network.392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.260 [INFO][4488] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" HandleID="k8s-pod-network.392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039b0e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-f16f13e513", "pod":"coredns-668d6bf9bc-fxl85", "timestamp":"2025-11-01 00:22:31.260344657 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f16f13e513", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.260 [INFO][4488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.361 [INFO][4488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.361 [INFO][4488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f16f13e513' Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.394 [INFO][4488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.408 [INFO][4488] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.468 [INFO][4488] ipam/ipam.go 511: Trying affinity for 192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.476 [INFO][4488] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.483 [INFO][4488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.483 [INFO][4488] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.192/26 handle="k8s-pod-network.392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.489 [INFO][4488] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01 Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.514 [INFO][4488] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.192/26 handle="k8s-pod-network.392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.548 [INFO][4488] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.17.197/26] block=192.168.17.192/26 handle="k8s-pod-network.392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.548 [INFO][4488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.197/26] handle="k8s-pod-network.392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.549 [INFO][4488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
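One detail worth noticing in the endpoint dumps above: the allocation unit is a /26, but each WorkloadEndpoint records its address as a /32 (IPNetworks:["192.168.17.196/32"] and, for this pod, the just-claimed 192.168.17.197). The block is only an allocation granularity; routing to the pod is per-address over its cali* veth. A tiny illustrative sketch of that conversion:

    // Convert a claimed ordinal in the /26 block into the /32 CIDR the
    // WorkloadEndpoint records. Illustrative only.
    package main

    import (
        "fmt"
        "net"
    )

    func podCIDR(blockBase net.IP, ord int) string {
        ip := make(net.IP, 4)
        copy(ip, blockBase.To4())
        ip[3] += byte(ord)
        return ip.String() + "/32" // what lands in the endpoint's IPNetworks
    }

    func main() {
        base := net.ParseIP("192.168.17.192")
        fmt.Println(podCIDR(base, 5)) // 192.168.17.197/32, as claimed for coredns-668d6bf9bc-fxl85
    }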
Nov 1 00:22:31.754832 containerd[1578]: 2025-11-01 00:22:31.549 [INFO][4488] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.197/26] IPv6=[] ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" HandleID="k8s-pod-network.392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:31.757830 containerd[1578]: 2025-11-01 00:22:31.589 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Namespace="kube-system" Pod="coredns-668d6bf9bc-fxl85" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9637455a-d2d9-41ac-be89-aeef7331b819", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"", Pod:"coredns-668d6bf9bc-fxl85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26b32575278", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:31.757830 containerd[1578]: 2025-11-01 00:22:31.592 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.197/32] ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Namespace="kube-system" Pod="coredns-668d6bf9bc-fxl85" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:31.757830 containerd[1578]: 2025-11-01 00:22:31.592 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26b32575278 ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Namespace="kube-system" Pod="coredns-668d6bf9bc-fxl85" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:31.757830 containerd[1578]: 2025-11-01 00:22:31.636 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-fxl85" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:31.757830 containerd[1578]: 2025-11-01 00:22:31.639 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Namespace="kube-system" Pod="coredns-668d6bf9bc-fxl85" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9637455a-d2d9-41ac-be89-aeef7331b819", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01", Pod:"coredns-668d6bf9bc-fxl85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26b32575278", MAC:"ea:0f:a3:d1:f4:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:31.757830 containerd[1578]: 2025-11-01 00:22:31.719 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01" Namespace="kube-system" Pod="coredns-668d6bf9bc-fxl85" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:31.864122 systemd-networkd[1227]: cali42609e27a27: Gained IPv6LL Nov 1 00:22:31.940237 systemd-networkd[1227]: cali8f6a620a319: Link UP Nov 1 00:22:31.951516 systemd-networkd[1227]: cali8f6a620a319: Gained carrier Nov 1 00:22:31.987885 kubelet[2664]: E1101 00:22:31.987837 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2zlcz" 
podUID="14a0cb3a-c17b-419c-80e4-76ffe3aff4c5" Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.518 [INFO][4526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.521 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" iface="eth0" netns="/var/run/netns/cni-5d2f5590-1f21-6d7f-3ee8-9303bf35e8ca" Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.531 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" iface="eth0" netns="/var/run/netns/cni-5d2f5590-1f21-6d7f-3ee8-9303bf35e8ca" Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.534 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" iface="eth0" netns="/var/run/netns/cni-5d2f5590-1f21-6d7f-3ee8-9303bf35e8ca" Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.535 [INFO][4526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.535 [INFO][4526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.743 [INFO][4551] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" HandleID="k8s-pod-network.f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.777 [INFO][4551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.850 [INFO][4551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.905 [WARNING][4551] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" HandleID="k8s-pod-network.f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.905 [INFO][4551] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" HandleID="k8s-pod-network.f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.925 [INFO][4551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:32.021920 containerd[1578]: 2025-11-01 00:22:31.975 [INFO][4526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:32.038170 systemd[1]: run-netns-cni\x2d5d2f5590\x2d1f21\x2d6d7f\x2d3ee8\x2d9303bf35e8ca.mount: Deactivated successfully. 
Nov 1 00:22:32.049635 containerd[1578]: time="2025-11-01T00:22:32.045910833Z" level=info msg="TearDown network for sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\" successfully" Nov 1 00:22:32.050019 containerd[1578]: time="2025-11-01T00:22:32.045967261Z" level=info msg="StopPodSandbox for \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\" returns successfully" Nov 1 00:22:32.068138 containerd[1578]: time="2025-11-01T00:22:32.068065765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mglhw,Uid:c76f0dc0-2591-4062-8741-1604477875d5,Namespace:calico-system,Attempt:1,}" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:30.996 [INFO][4468] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0 calico-kube-controllers-76d974b5c6- calico-system 70d7c9dc-5ae1-4150-b4ab-1e59c014a05a 987 0 2025-11-01 00:22:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76d974b5c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-f16f13e513 calico-kube-controllers-76d974b5c6-z26qx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8f6a620a319 [] [] }} ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Namespace="calico-system" Pod="calico-kube-controllers-76d974b5c6-z26qx" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:30.996 [INFO][4468] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Namespace="calico-system" Pod="calico-kube-controllers-76d974b5c6-z26qx" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.290 [INFO][4497] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" HandleID="k8s-pod-network.35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.290 [INFO][4497] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" HandleID="k8s-pod-network.35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000329050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f16f13e513", "pod":"calico-kube-controllers-76d974b5c6-z26qx", "timestamp":"2025-11-01 00:22:31.29050126 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f16f13e513", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.290 [INFO][4497] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.551 [INFO][4497] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.551 [INFO][4497] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f16f13e513' Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.603 [INFO][4497] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.645 [INFO][4497] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.708 [INFO][4497] ipam/ipam.go 511: Trying affinity for 192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.736 [INFO][4497] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.777 [INFO][4497] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.778 [INFO][4497] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.192/26 handle="k8s-pod-network.35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.799 [INFO][4497] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7 Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.820 [INFO][4497] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.192/26 handle="k8s-pod-network.35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.847 [INFO][4497] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.17.198/26] block=192.168.17.192/26 handle="k8s-pod-network.35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.847 [INFO][4497] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.198/26] handle="k8s-pod-network.35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.847 [INFO][4497] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
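The IPAM sequence above first confirms this node's affinity for the block 192.168.17.192/26 and only then claims .198 from it; a /26 spans the 64 addresses .192 through .255, so every pod IP assigned on this node should fall inside that range. A quick self-contained check (illustrative only, not Calico code) against the addresses seen in this log:

```go
// Verify that the pod addresses handed out in this log (.197 through .200)
// really belong to the 192.168.17.192/26 block this node holds an affinity for.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.17.192/26") // covers .192-.255
	for _, s := range []string{
		"192.168.17.197", // coredns-668d6bf9bc-fxl85
		"192.168.17.198", // calico-kube-controllers-76d974b5c6-z26qx
		"192.168.17.199", // csi-node-driver-mglhw
		"192.168.17.200", // calico-apiserver-7d65b76bbf-mht9v
	} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(addr))
	}
}
```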
Nov 1 00:22:32.124559 containerd[1578]: 2025-11-01 00:22:31.847 [INFO][4497] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.198/26] IPv6=[] ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" HandleID="k8s-pod-network.35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:32.128935 containerd[1578]: 2025-11-01 00:22:31.906 [INFO][4468] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Namespace="calico-system" Pod="calico-kube-controllers-76d974b5c6-z26qx" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0", GenerateName:"calico-kube-controllers-76d974b5c6-", Namespace:"calico-system", SelfLink:"", UID:"70d7c9dc-5ae1-4150-b4ab-1e59c014a05a", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76d974b5c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"", Pod:"calico-kube-controllers-76d974b5c6-z26qx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f6a620a319", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:32.128935 containerd[1578]: 2025-11-01 00:22:31.906 [INFO][4468] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.198/32] ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Namespace="calico-system" Pod="calico-kube-controllers-76d974b5c6-z26qx" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:32.128935 containerd[1578]: 2025-11-01 00:22:31.907 [INFO][4468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f6a620a319 ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Namespace="calico-system" Pod="calico-kube-controllers-76d974b5c6-z26qx" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:32.128935 containerd[1578]: 2025-11-01 00:22:31.964 [INFO][4468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Namespace="calico-system" Pod="calico-kube-controllers-76d974b5c6-z26qx" 
WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:32.128935 containerd[1578]: 2025-11-01 00:22:31.994 [INFO][4468] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Namespace="calico-system" Pod="calico-kube-controllers-76d974b5c6-z26qx" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0", GenerateName:"calico-kube-controllers-76d974b5c6-", Namespace:"calico-system", SelfLink:"", UID:"70d7c9dc-5ae1-4150-b4ab-1e59c014a05a", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76d974b5c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7", Pod:"calico-kube-controllers-76d974b5c6-z26qx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f6a620a319", MAC:"be:cc:64:7b:50:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:32.128935 containerd[1578]: 2025-11-01 00:22:32.097 [INFO][4468] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7" Namespace="calico-system" Pod="calico-kube-controllers-76d974b5c6-z26qx" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:32.313796 containerd[1578]: time="2025-11-01T00:22:32.311062425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6zgv,Uid:945dd47a-80ea-4932-9742-bcde28f179e6,Namespace:kube-system,Attempt:1,} returns sandbox id \"b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc\"" Nov 1 00:22:32.317449 kubelet[2664]: E1101 00:22:32.315388 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:31.586 [INFO][4525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:31.590 [INFO][4525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" iface="eth0" netns="/var/run/netns/cni-d051f47b-5fac-a9be-c830-d0df602fc651" Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:31.590 [INFO][4525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" iface="eth0" netns="/var/run/netns/cni-d051f47b-5fac-a9be-c830-d0df602fc651" Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:31.591 [INFO][4525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" iface="eth0" netns="/var/run/netns/cni-d051f47b-5fac-a9be-c830-d0df602fc651" Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:31.591 [INFO][4525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:31.593 [INFO][4525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:32.176 [INFO][4569] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" HandleID="k8s-pod-network.c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:32.177 [INFO][4569] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:32.178 [INFO][4569] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:32.237 [WARNING][4569] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" HandleID="k8s-pod-network.c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:32.237 [INFO][4569] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" HandleID="k8s-pod-network.c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:32.253 [INFO][4569] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:32.330250 containerd[1578]: 2025-11-01 00:22:32.277 [INFO][4525] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:32.345175 containerd[1578]: time="2025-11-01T00:22:32.345082182Z" level=info msg="TearDown network for sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\" successfully" Nov 1 00:22:32.345175 containerd[1578]: time="2025-11-01T00:22:32.345163171Z" level=info msg="StopPodSandbox for \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\" returns successfully" Nov 1 00:22:32.358010 containerd[1578]: time="2025-11-01T00:22:32.357620370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d65b76bbf-mht9v,Uid:fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:22:32.365414 systemd[1]: run-netns-cni\x2dd051f47b\x2d5fac\x2da9be\x2dc830\x2dd0df602fc651.mount: Deactivated successfully. Nov 1 00:22:32.401464 containerd[1578]: time="2025-11-01T00:22:32.399991384Z" level=info msg="CreateContainer within sandbox \"b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:22:32.401464 containerd[1578]: time="2025-11-01T00:22:32.399715634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:32.401464 containerd[1578]: time="2025-11-01T00:22:32.399779904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:32.401464 containerd[1578]: time="2025-11-01T00:22:32.399791422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:32.401464 containerd[1578]: time="2025-11-01T00:22:32.399896912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:32.417619 containerd[1578]: time="2025-11-01T00:22:32.406583672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:32.417619 containerd[1578]: time="2025-11-01T00:22:32.406652597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:32.417619 containerd[1578]: time="2025-11-01T00:22:32.406700989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:32.417619 containerd[1578]: time="2025-11-01T00:22:32.406998023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:32.529317 containerd[1578]: time="2025-11-01T00:22:32.529268207Z" level=info msg="CreateContainer within sandbox \"b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99ffa51ef234d35e3277b6823a55c8131f6b3b07be3d9ea9df5c2d5b8c3e73e8\"" Nov 1 00:22:32.538022 containerd[1578]: time="2025-11-01T00:22:32.535311483Z" level=info msg="StartContainer for \"99ffa51ef234d35e3277b6823a55c8131f6b3b07be3d9ea9df5c2d5b8c3e73e8\"" Nov 1 00:22:32.633609 systemd-networkd[1227]: cali44587a2a6ab: Gained IPv6LL Nov 1 00:22:32.715826 containerd[1578]: time="2025-11-01T00:22:32.714444387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fxl85,Uid:9637455a-d2d9-41ac-be89-aeef7331b819,Namespace:kube-system,Attempt:1,} returns sandbox id \"392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01\"" Nov 1 00:22:32.722768 kubelet[2664]: E1101 00:22:32.718905 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:32.752311 containerd[1578]: time="2025-11-01T00:22:32.752235971Z" level=info msg="CreateContainer within sandbox \"392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:22:32.816585 containerd[1578]: time="2025-11-01T00:22:32.816507217Z" level=info msg="CreateContainer within sandbox \"392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6488324d27313ba7b6464e7a05ae5fb6d9bd0d1f6da8908f8b0a9b8bfb9bceb6\"" Nov 1 00:22:32.823658 containerd[1578]: time="2025-11-01T00:22:32.818764346Z" level=info msg="StartContainer for \"6488324d27313ba7b6464e7a05ae5fb6d9bd0d1f6da8908f8b0a9b8bfb9bceb6\"" Nov 1 00:22:33.015576 systemd-networkd[1227]: cali8f6a620a319: Gained IPv6LL Nov 1 00:22:33.114534 containerd[1578]: time="2025-11-01T00:22:33.114485835Z" level=info msg="StartContainer for \"99ffa51ef234d35e3277b6823a55c8131f6b3b07be3d9ea9df5c2d5b8c3e73e8\" returns successfully" Nov 1 00:22:33.119500 containerd[1578]: time="2025-11-01T00:22:33.118746374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76d974b5c6-z26qx,Uid:70d7c9dc-5ae1-4150-b4ab-1e59c014a05a,Namespace:calico-system,Attempt:1,} returns sandbox id \"35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7\"" Nov 1 00:22:33.121706 systemd-networkd[1227]: cali0b7fff65878: Link UP Nov 1 00:22:33.122106 systemd-networkd[1227]: cali0b7fff65878: Gained carrier Nov 1 00:22:33.130303 containerd[1578]: time="2025-11-01T00:22:33.124197422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:22:33.169996 containerd[1578]: time="2025-11-01T00:22:33.169325197Z" level=info msg="StartContainer for \"6488324d27313ba7b6464e7a05ae5fb6d9bd0d1f6da8908f8b0a9b8bfb9bceb6\" returns successfully" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.548 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0 csi-node-driver- calico-system c76f0dc0-2591-4062-8741-1604477875d5 1011 0 2025-11-01 00:22:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-f16f13e513 csi-node-driver-mglhw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0b7fff65878 [] [] }} ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Namespace="calico-system" Pod="csi-node-driver-mglhw" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.556 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Namespace="calico-system" Pod="csi-node-driver-mglhw" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.845 [INFO][4713] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" HandleID="k8s-pod-network.89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.845 [INFO][4713] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" HandleID="k8s-pod-network.89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f16f13e513", "pod":"csi-node-driver-mglhw", "timestamp":"2025-11-01 00:22:32.84549186 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f16f13e513", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.845 [INFO][4713] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.846 [INFO][4713] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.846 [INFO][4713] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f16f13e513' Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.895 [INFO][4713] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.926 [INFO][4713] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.954 [INFO][4713] ipam/ipam.go 511: Trying affinity for 192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.961 [INFO][4713] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.980 [INFO][4713] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.980 [INFO][4713] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.192/26 handle="k8s-pod-network.89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:32.985 [INFO][4713] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929 Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:33.011 [INFO][4713] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.192/26 handle="k8s-pod-network.89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:33.055 [INFO][4713] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.17.199/26] block=192.168.17.192/26 handle="k8s-pod-network.89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:33.056 [INFO][4713] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.199/26] handle="k8s-pod-network.89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:33.057 [INFO][4713] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
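Throughout these records the IPAM handle is the fixed prefix `k8s-pod-network.` plus the sandbox ID, so the ADD path that creates the handle and the DEL path that later logs "Releasing address using handleID" name the same allocation; it is also why a repeated DEL for an already-released sandbox only produces the harmless "Asked to release address but it doesn't exist. Ignoring" warning seen earlier. A sketch of the convention (assumed helper names, not Calico's API):

```go
// Illustrative sketch of the handle naming convention visible in these logs:
// the IPAM handle is "k8s-pod-network." plus the sandbox ID, so ADD and DEL
// for the same sandbox agree on which allocation to create or release.
package main

import (
	"fmt"
	"strings"
)

const handlePrefix = "k8s-pod-network."

func handleID(sandboxID string) string { return handlePrefix + sandboxID }

func sandboxFromHandle(h string) (string, bool) {
	return strings.CutPrefix(h, handlePrefix)
}

func main() {
	h := handleID("89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929")
	fmt.Println(h)
	id, ok := sandboxFromHandle(h)
	fmt.Println(id, ok)
}
```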
Nov 1 00:22:33.187570 containerd[1578]: 2025-11-01 00:22:33.057 [INFO][4713] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.199/26] IPv6=[] ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" HandleID="k8s-pod-network.89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:33.190986 containerd[1578]: 2025-11-01 00:22:33.081 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Namespace="calico-system" Pod="csi-node-driver-mglhw" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c76f0dc0-2591-4062-8741-1604477875d5", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"", Pod:"csi-node-driver-mglhw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b7fff65878", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:33.190986 containerd[1578]: 2025-11-01 00:22:33.083 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.199/32] ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Namespace="calico-system" Pod="csi-node-driver-mglhw" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:33.190986 containerd[1578]: 2025-11-01 00:22:33.083 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b7fff65878 ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Namespace="calico-system" Pod="csi-node-driver-mglhw" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:33.190986 containerd[1578]: 2025-11-01 00:22:33.121 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Namespace="calico-system" Pod="csi-node-driver-mglhw" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:33.190986 containerd[1578]: 2025-11-01 00:22:33.136 [INFO][4606] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Namespace="calico-system" Pod="csi-node-driver-mglhw" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c76f0dc0-2591-4062-8741-1604477875d5", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929", Pod:"csi-node-driver-mglhw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b7fff65878", MAC:"42:7c:6f:05:b0:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:33.190986 containerd[1578]: 2025-11-01 00:22:33.167 [INFO][4606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929" Namespace="calico-system" Pod="csi-node-driver-mglhw" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:33.271211 systemd-networkd[1227]: cali26b32575278: Gained IPv6LL Nov 1 00:22:33.298717 containerd[1578]: time="2025-11-01T00:22:33.296715946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:33.298717 containerd[1578]: time="2025-11-01T00:22:33.296795733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:33.298717 containerd[1578]: time="2025-11-01T00:22:33.296819689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:33.298717 containerd[1578]: time="2025-11-01T00:22:33.296949529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:33.341475 systemd-networkd[1227]: cali65b5de9487f: Link UP Nov 1 00:22:33.347924 systemd-networkd[1227]: cali65b5de9487f: Gained carrier Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:32.835 [INFO][4682] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0 calico-apiserver-7d65b76bbf- calico-apiserver fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b 1014 0 2025-11-01 00:21:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d65b76bbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-f16f13e513 calico-apiserver-7d65b76bbf-mht9v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65b5de9487f [] [] }} ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-mht9v" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:32.835 [INFO][4682] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-mht9v" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.194 [INFO][4773] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" HandleID="k8s-pod-network.0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.195 [INFO][4773] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" HandleID="k8s-pod-network.0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-f16f13e513", "pod":"calico-apiserver-7d65b76bbf-mht9v", "timestamp":"2025-11-01 00:22:33.19467729 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f16f13e513", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.195 [INFO][4773] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.196 [INFO][4773] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.196 [INFO][4773] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f16f13e513' Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.225 [INFO][4773] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.246 [INFO][4773] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.259 [INFO][4773] ipam/ipam.go 511: Trying affinity for 192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.264 [INFO][4773] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.270 [INFO][4773] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.192/26 host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.274 [INFO][4773] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.192/26 handle="k8s-pod-network.0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.280 [INFO][4773] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51 Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.290 [INFO][4773] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.192/26 handle="k8s-pod-network.0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.309 [INFO][4773] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.17.200/26] block=192.168.17.192/26 handle="k8s-pod-network.0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.310 [INFO][4773] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.200/26] handle="k8s-pod-network.0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" host="ci-4081.3.6-n-f16f13e513" Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.310 [INFO][4773] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
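Four plugin invocations ([4497], [4551], [4713], [4773]) ran close together here, and each logs the same acquire/release pair around the host-wide IPAM lock; that serialization is what lets them claim .197, .198, .199, and .200 from the shared block without ever handing out the same address twice. A toy model of the idea (illustrative only, not Calico's implementation):

```go
// Toy model of the "host-wide IPAM lock" pattern: concurrent CNI ADDs on one
// node serialize around a single lock while picking the next free address
// from the shared block, so no two sandboxes can receive the same IP.
package main

import (
	"fmt"
	"sync"
)

type ipamBlock struct {
	mu   sync.Mutex // stands in for the host-wide IPAM lock
	next int        // next free host offset within 192.168.17.192/26
}

func (b *ipamBlock) assign() string {
	b.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.17.%d", 192+b.next)
	b.next++
	return ip
}

func main() {
	b := &ipamBlock{next: 5} // .197 was the next free address in this log
	pods := []string{
		"coredns-668d6bf9bc-fxl85",
		"calico-kube-controllers-76d974b5c6-z26qx",
		"csi-node-driver-mglhw",
		"calico-apiserver-7d65b76bbf-mht9v",
	}
	var wg sync.WaitGroup
	for _, pod := range pods {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Printf("%s -> %s\n", p, b.assign())
		}(pod)
	}
	wg.Wait() // four distinct addresses, in whatever order the goroutines ran
}
```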
Nov 1 00:22:33.418796 containerd[1578]: 2025-11-01 00:22:33.310 [INFO][4773] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.200/26] IPv6=[] ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" HandleID="k8s-pod-network.0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:33.419574 containerd[1578]: 2025-11-01 00:22:33.326 [INFO][4682] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-mht9v" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0", GenerateName:"calico-apiserver-7d65b76bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d65b76bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"", Pod:"calico-apiserver-7d65b76bbf-mht9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65b5de9487f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:33.419574 containerd[1578]: 2025-11-01 00:22:33.326 [INFO][4682] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.200/32] ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-mht9v" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:33.419574 containerd[1578]: 2025-11-01 00:22:33.326 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65b5de9487f ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-mht9v" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:33.419574 containerd[1578]: 2025-11-01 00:22:33.355 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-mht9v" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:33.419574 containerd[1578]: 2025-11-01 00:22:33.363 [INFO][4682] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-mht9v" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0", GenerateName:"calico-apiserver-7d65b76bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d65b76bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51", Pod:"calico-apiserver-7d65b76bbf-mht9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65b5de9487f", MAC:"fe:63:3f:76:66:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:33.419574 containerd[1578]: 2025-11-01 00:22:33.405 [INFO][4682] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51" Namespace="calico-apiserver" Pod="calico-apiserver-7d65b76bbf-mht9v" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:33.483984 containerd[1578]: time="2025-11-01T00:22:33.482519562Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:33.485645 containerd[1578]: time="2025-11-01T00:22:33.485594254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:22:33.486410 containerd[1578]: time="2025-11-01T00:22:33.485780762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:22:33.487104 kubelet[2664]: E1101 00:22:33.487047 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 
00:22:33.489490 kubelet[2664]: E1101 00:22:33.487286 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:33.489490 kubelet[2664]: E1101 00:22:33.487476 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwswh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76d974b5c6-z26qx_calico-system(70d7c9dc-5ae1-4150-b4ab-1e59c014a05a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:33.490059 kubelet[2664]: E1101 00:22:33.490019 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" podUID="70d7c9dc-5ae1-4150-b4ab-1e59c014a05a" Nov 1 00:22:33.506794 containerd[1578]: time="2025-11-01T00:22:33.506276971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:33.513112 containerd[1578]: time="2025-11-01T00:22:33.508615831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:33.513112 containerd[1578]: time="2025-11-01T00:22:33.508658951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:33.513112 containerd[1578]: time="2025-11-01T00:22:33.508836029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:33.587995 containerd[1578]: time="2025-11-01T00:22:33.587301756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mglhw,Uid:c76f0dc0-2591-4062-8741-1604477875d5,Namespace:calico-system,Attempt:1,} returns sandbox id \"89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929\"" Nov 1 00:22:33.595227 containerd[1578]: time="2025-11-01T00:22:33.594806530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:22:33.719926 containerd[1578]: time="2025-11-01T00:22:33.719883163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d65b76bbf-mht9v,Uid:fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51\"" Nov 1 00:22:33.898914 containerd[1578]: time="2025-11-01T00:22:33.898843904Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:33.902082 containerd[1578]: time="2025-11-01T00:22:33.901042360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:22:33.902082 containerd[1578]: time="2025-11-01T00:22:33.901046971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:22:33.902364 kubelet[2664]: E1101 00:22:33.901671 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:33.902364 kubelet[2664]: E1101 00:22:33.901746 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:33.902364 kubelet[2664]: E1101 00:22:33.902008 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp8fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mglhw_calico-system(c76f0dc0-2591-4062-8741-1604477875d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:33.904224 containerd[1578]: time="2025-11-01T00:22:33.904166217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:34.020763 kubelet[2664]: E1101 00:22:34.020621 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" podUID="70d7c9dc-5ae1-4150-b4ab-1e59c014a05a" Nov 1 00:22:34.020763 kubelet[2664]: E1101 00:22:34.020731 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:34.035455 kubelet[2664]: E1101 00:22:34.034844 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:34.084726 kubelet[2664]: I1101 00:22:34.083072 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fxl85" podStartSLOduration=48.083036055 podStartE2EDuration="48.083036055s" podCreationTimestamp="2025-11-01 00:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:34.080623703 +0000 UTC m=+53.128100988" watchObservedRunningTime="2025-11-01 00:22:34.083036055 +0000 UTC m=+53.130513338" Nov 1 00:22:34.118849 kubelet[2664]: I1101 00:22:34.115950 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s6zgv" podStartSLOduration=48.115812128 podStartE2EDuration="48.115812128s" podCreationTimestamp="2025-11-01 00:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:34.107927257 +0000 UTC m=+53.155404540" watchObservedRunningTime="2025-11-01 00:22:34.115812128 +0000 UTC m=+53.163289410" Nov 1 00:22:34.253628 containerd[1578]: time="2025-11-01T00:22:34.253299628Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:34.256592 containerd[1578]: time="2025-11-01T00:22:34.255245152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:34.256592 containerd[1578]: time="2025-11-01T00:22:34.255409356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:34.257905 kubelet[2664]: E1101 00:22:34.257119 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:34.257905 kubelet[2664]: E1101 00:22:34.257195 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:34.257905 kubelet[2664]: E1101 00:22:34.257596 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8bpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d65b76bbf-mht9v_calico-apiserver(fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:34.259983 kubelet[2664]: E1101 00:22:34.259777 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" podUID="fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b" Nov 1 00:22:34.261520 containerd[1578]: time="2025-11-01T00:22:34.260442720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:22:34.300149 systemd-journald[1137]: Under memory pressure, flushing caches. Nov 1 00:22:34.295543 systemd-resolved[1480]: Under memory pressure, flushing caches. Nov 1 00:22:34.295585 systemd-resolved[1480]: Flushed all caches. 
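
The containerd entries above ("trying next host - response was http.StatusNotFound", then "PullImage ... failed ... not found") show the registry's manifest endpoint answering 404 for each ghcr.io/flatcar/calico/*:v3.30.4 reference, which kubelet then surfaces as ErrImagePull. A minimal sketch of the same existence check against the OCI distribution API — assuming GHCR's anonymous token endpoint; the URLs, headers, and function name here are assumptions for illustration, not taken from the log:

    import json
    import urllib.error
    import urllib.request

    def tag_exists(registry: str, repo: str, tag: str) -> bool:
        # Anonymous pull token; GHCR issues one even for public repositories.
        with urllib.request.urlopen(
                f"https://{registry}/token?scope=repository:{repo}:pull") as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://{registry}/v2/{repo}/manifests/{tag}",
            method="HEAD",
            headers={
                "Authorization": f"Bearer {token}",
                # Accept plain manifests and multi-arch indexes alike.
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.oci.image.manifest.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            })
        try:
            urllib.request.urlopen(req).close()
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:  # the NotFound containerd reports above
                return False
            raise

    print(tag_exists("ghcr.io", "flatcar/calico/csi", "v3.30.4"))  # False at the time of this log

With only one configured host for ghcr.io, "trying next host" has nowhere left to go after the 404, so the pull fails with the NotFound rpc error recorded above.
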
Nov 1 00:22:34.581626 containerd[1578]: time="2025-11-01T00:22:34.581385682Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:34.583960 containerd[1578]: time="2025-11-01T00:22:34.583621591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:22:34.583960 containerd[1578]: time="2025-11-01T00:22:34.583876733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:22:34.584961 kubelet[2664]: E1101 00:22:34.584599 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:34.584961 kubelet[2664]: E1101 00:22:34.584681 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:34.587334 kubelet[2664]: E1101 00:22:34.584891 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp8fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mglhw_calico-system(c76f0dc0-2591-4062-8741-1604477875d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:34.589076 kubelet[2664]: E1101 00:22:34.588980 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:34.810510 systemd-networkd[1227]: cali65b5de9487f: Gained IPv6LL Nov 1 00:22:34.998187 systemd-networkd[1227]: cali0b7fff65878: Gained IPv6LL Nov 1 00:22:35.047225 kubelet[2664]: E1101 00:22:35.047122 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:35.048852 kubelet[2664]: E1101 00:22:35.048801 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:35.050149 kubelet[2664]: E1101 00:22:35.049083 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" podUID="fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b" Nov 1 00:22:35.052532 kubelet[2664]: E1101 00:22:35.050490 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" podUID="70d7c9dc-5ae1-4150-b4ab-1e59c014a05a" Nov 1 00:22:35.052532 kubelet[2664]: E1101 00:22:35.051184 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:36.050360 kubelet[2664]: E1101 00:22:36.049681 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:36.051106 kubelet[2664]: E1101 00:22:36.050794 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:36.947990 systemd[1]: Started sshd@7-165.232.144.31:22-139.178.68.195:52316.service - OpenSSH per-connection server daemon (139.178.68.195:52316). Nov 1 00:22:37.118046 sshd[4937]: Accepted publickey for core from 139.178.68.195 port 52316 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:22:37.122167 sshd[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:37.142769 systemd-logind[1552]: New session 8 of user core. Nov 1 00:22:37.148985 systemd[1]: Started session-8.scope - Session 8 of User core. 
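
kubelet's recurring "Nameserver limits exceeded" entries above report that the pod-effective resolv.conf was truncated to three entries, and the applied line even carries a duplicate (67.207.67.3 appears twice), so only two distinct resolvers survive. A sketch of the cap, assuming the classic glibc limit of three nameservers; the function, the sample file, and the fourth resolver (a TEST-NET address) are illustrative, not kubelet code:

    MAXNS = 3  # glibc's classic resolver cap, which kubelet warns about above

    def applied_nameservers(resolv_conf: str) -> list[str]:
        servers = []
        for line in resolv_conf.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers[:MAXNS]  # entries past the cap are the "omitted" ones

    sample = ("nameserver 67.207.67.3\n"
              "nameserver 67.207.67.2\n"
              "nameserver 67.207.67.3\n"   # duplicate survives, as in the log
              "nameserver 192.0.2.53\n")   # hypothetical fourth resolver
    print(applied_nameservers(sample))
    # ['67.207.67.3', '67.207.67.2', '67.207.67.3'] -- the applied line above

As the applied line shows, duplicates are not collapsed before the cap is taken, which is why deduplicating the host's resolv.conf would both silence the warning and admit a distinct third resolver.
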
Nov 1 00:22:37.807937 sshd[4937]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:37.817632 systemd[1]: sshd@7-165.232.144.31:22-139.178.68.195:52316.service: Deactivated successfully. Nov 1 00:22:37.826999 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:22:37.827598 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:22:37.835414 systemd-logind[1552]: Removed session 8. Nov 1 00:22:41.160643 containerd[1578]: time="2025-11-01T00:22:41.160579960Z" level=info msg="StopPodSandbox for \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\"" Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.299 [WARNING][4975] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0", GenerateName:"calico-kube-controllers-76d974b5c6-", Namespace:"calico-system", SelfLink:"", UID:"70d7c9dc-5ae1-4150-b4ab-1e59c014a05a", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76d974b5c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7", Pod:"calico-kube-controllers-76d974b5c6-z26qx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f6a620a319", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.301 [INFO][4975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.301 [INFO][4975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" iface="eth0" netns="" Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.301 [INFO][4975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.301 [INFO][4975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.345 [INFO][4984] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" HandleID="k8s-pod-network.32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.345 [INFO][4984] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.345 [INFO][4984] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.363 [WARNING][4984] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" HandleID="k8s-pod-network.32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.363 [INFO][4984] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" HandleID="k8s-pod-network.32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.365 [INFO][4984] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:41.374467 containerd[1578]: 2025-11-01 00:22:41.368 [INFO][4975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:41.375715 containerd[1578]: time="2025-11-01T00:22:41.375661673Z" level=info msg="TearDown network for sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\" successfully" Nov 1 00:22:41.376327 containerd[1578]: time="2025-11-01T00:22:41.376007953Z" level=info msg="StopPodSandbox for \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\" returns successfully" Nov 1 00:22:41.380416 containerd[1578]: time="2025-11-01T00:22:41.379929436Z" level=info msg="RemovePodSandbox for \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\"" Nov 1 00:22:41.382737 containerd[1578]: time="2025-11-01T00:22:41.380769093Z" level=info msg="Forcibly stopping sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\"" Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.523 [WARNING][4999] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0", GenerateName:"calico-kube-controllers-76d974b5c6-", Namespace:"calico-system", SelfLink:"", UID:"70d7c9dc-5ae1-4150-b4ab-1e59c014a05a", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76d974b5c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"35e54b0e83b35a18c111e9f5550045070583150bfeb50f380f5729ff6f6b27c7", Pod:"calico-kube-controllers-76d974b5c6-z26qx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f6a620a319", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.524 [INFO][4999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.524 [INFO][4999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" iface="eth0" netns="" Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.524 [INFO][4999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.524 [INFO][4999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.590 [INFO][5006] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" HandleID="k8s-pod-network.32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.595 [INFO][5006] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.595 [INFO][5006] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.603 [WARNING][5006] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" HandleID="k8s-pod-network.32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.603 [INFO][5006] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" HandleID="k8s-pod-network.32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--kube--controllers--76d974b5c6--z26qx-eth0" Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.605 [INFO][5006] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:41.610294 containerd[1578]: 2025-11-01 00:22:41.607 [INFO][4999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601" Nov 1 00:22:41.614594 containerd[1578]: time="2025-11-01T00:22:41.612236290Z" level=info msg="TearDown network for sandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\" successfully" Nov 1 00:22:41.623042 containerd[1578]: time="2025-11-01T00:22:41.622959292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:22:41.624459 containerd[1578]: time="2025-11-01T00:22:41.623073419Z" level=info msg="RemovePodSandbox \"32ad9729142231025c482351ed26411b7fdf990fc7cd2cf994bb5630c0b7f601\" returns successfully" Nov 1 00:22:41.624459 containerd[1578]: time="2025-11-01T00:22:41.623947344Z" level=info msg="StopPodSandbox for \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\"" Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.692 [WARNING][5020] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9637455a-d2d9-41ac-be89-aeef7331b819", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01", Pod:"coredns-668d6bf9bc-fxl85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26b32575278", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.692 [INFO][5020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.692 [INFO][5020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" iface="eth0" netns="" Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.692 [INFO][5020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.692 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.740 [INFO][5027] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" HandleID="k8s-pod-network.cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.740 [INFO][5027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.740 [INFO][5027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.747 [WARNING][5027] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" HandleID="k8s-pod-network.cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.747 [INFO][5027] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" HandleID="k8s-pod-network.cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.750 [INFO][5027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:41.761617 containerd[1578]: 2025-11-01 00:22:41.754 [INFO][5020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:41.761617 containerd[1578]: time="2025-11-01T00:22:41.761523018Z" level=info msg="TearDown network for sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\" successfully" Nov 1 00:22:41.761617 containerd[1578]: time="2025-11-01T00:22:41.761548501Z" level=info msg="StopPodSandbox for \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\" returns successfully" Nov 1 00:22:41.763726 containerd[1578]: time="2025-11-01T00:22:41.763251369Z" level=info msg="RemovePodSandbox for \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\"" Nov 1 00:22:41.763726 containerd[1578]: time="2025-11-01T00:22:41.763302091Z" level=info msg="Forcibly stopping sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\"" Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.850 [WARNING][5041] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9637455a-d2d9-41ac-be89-aeef7331b819", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"392866c5340533be57add1a79c497124766ed8ab3a9585686d29ec9f95773b01", Pod:"coredns-668d6bf9bc-fxl85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26b32575278", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.853 [INFO][5041] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.853 [INFO][5041] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" iface="eth0" netns="" Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.853 [INFO][5041] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.853 [INFO][5041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.896 [INFO][5048] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" HandleID="k8s-pod-network.cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.896 [INFO][5048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.896 [INFO][5048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.905 [WARNING][5048] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" HandleID="k8s-pod-network.cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.905 [INFO][5048] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" HandleID="k8s-pod-network.cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--fxl85-eth0" Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.907 [INFO][5048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:41.912156 containerd[1578]: 2025-11-01 00:22:41.909 [INFO][5041] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c" Nov 1 00:22:41.913202 containerd[1578]: time="2025-11-01T00:22:41.912978645Z" level=info msg="TearDown network for sandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\" successfully" Nov 1 00:22:41.916791 containerd[1578]: time="2025-11-01T00:22:41.916741156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:22:41.917564 containerd[1578]: time="2025-11-01T00:22:41.916812616Z" level=info msg="RemovePodSandbox \"cc9eedfc7e04ff0ad441e22563513c6d4a52a67c40cbf31bc29d039a105f2d0c\" returns successfully" Nov 1 00:22:41.917998 containerd[1578]: time="2025-11-01T00:22:41.917778926Z" level=info msg="StopPodSandbox for \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\"" Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:41.972 [WARNING][5063] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:41.973 [INFO][5063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:41.973 [INFO][5063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" iface="eth0" netns="" Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:41.973 [INFO][5063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:41.973 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:42.009 [INFO][5070] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" HandleID="k8s-pod-network.ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:42.010 [INFO][5070] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:42.010 [INFO][5070] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:42.021 [WARNING][5070] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" HandleID="k8s-pod-network.ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:42.021 [INFO][5070] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" HandleID="k8s-pod-network.ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:42.024 [INFO][5070] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:42.032526 containerd[1578]: 2025-11-01 00:22:42.027 [INFO][5063] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:42.034108 containerd[1578]: time="2025-11-01T00:22:42.033014616Z" level=info msg="TearDown network for sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\" successfully" Nov 1 00:22:42.034108 containerd[1578]: time="2025-11-01T00:22:42.033106777Z" level=info msg="StopPodSandbox for \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\" returns successfully" Nov 1 00:22:42.034969 containerd[1578]: time="2025-11-01T00:22:42.034640633Z" level=info msg="RemovePodSandbox for \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\"" Nov 1 00:22:42.034969 containerd[1578]: time="2025-11-01T00:22:42.034685052Z" level=info msg="Forcibly stopping sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\"" Nov 1 00:22:42.191548 containerd[1578]: time="2025-11-01T00:22:42.187923900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.133 [WARNING][5084] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" WorkloadEndpoint="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.134 [INFO][5084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.134 [INFO][5084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" iface="eth0" netns="" Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.134 [INFO][5084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.134 [INFO][5084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.253 [INFO][5092] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" HandleID="k8s-pod-network.ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.253 [INFO][5092] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.253 [INFO][5092] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.283 [WARNING][5092] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" HandleID="k8s-pod-network.ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.283 [INFO][5092] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" HandleID="k8s-pod-network.ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Workload="ci--4081.3.6--n--f16f13e513-k8s-whisker--6f9d7d8847--x447l-eth0" Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.288 [INFO][5092] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:42.308033 containerd[1578]: 2025-11-01 00:22:42.298 [INFO][5084] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f" Nov 1 00:22:42.310181 containerd[1578]: time="2025-11-01T00:22:42.309251592Z" level=info msg="TearDown network for sandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\" successfully" Nov 1 00:22:42.315067 containerd[1578]: time="2025-11-01T00:22:42.314793560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:22:42.315067 containerd[1578]: time="2025-11-01T00:22:42.314884691Z" level=info msg="RemovePodSandbox \"ad71b563e3da55f64d2b899b5498f1a6f661e6a1fd613ca5a52468f7e22d852f\" returns successfully" Nov 1 00:22:42.316769 containerd[1578]: time="2025-11-01T00:22:42.316274898Z" level=info msg="StopPodSandbox for \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\"" Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.397 [WARNING][5106] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"945dd47a-80ea-4932-9742-bcde28f179e6", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc", Pod:"coredns-668d6bf9bc-s6zgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44587a2a6ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.398 [INFO][5106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.398 [INFO][5106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" iface="eth0" netns="" Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.398 [INFO][5106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.398 [INFO][5106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.428 [INFO][5114] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" HandleID="k8s-pod-network.8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.429 [INFO][5114] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.429 [INFO][5114] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.438 [WARNING][5114] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" HandleID="k8s-pod-network.8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.438 [INFO][5114] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" HandleID="k8s-pod-network.8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.440 [INFO][5114] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:42.447201 containerd[1578]: 2025-11-01 00:22:42.443 [INFO][5106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:42.447201 containerd[1578]: time="2025-11-01T00:22:42.447134602Z" level=info msg="TearDown network for sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\" successfully" Nov 1 00:22:42.449421 containerd[1578]: time="2025-11-01T00:22:42.447165539Z" level=info msg="StopPodSandbox for \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\" returns successfully" Nov 1 00:22:42.449421 containerd[1578]: time="2025-11-01T00:22:42.448613254Z" level=info msg="RemovePodSandbox for \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\"" Nov 1 00:22:42.449421 containerd[1578]: time="2025-11-01T00:22:42.448651320Z" level=info msg="Forcibly stopping sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\"" Nov 1 00:22:42.560637 containerd[1578]: time="2025-11-01T00:22:42.560423839Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:42.561704 containerd[1578]: time="2025-11-01T00:22:42.561628704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:22:42.561968 containerd[1578]: time="2025-11-01T00:22:42.561785061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:22:42.563674 kubelet[2664]: E1101 00:22:42.563615 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:42.565648 kubelet[2664]: E1101 00:22:42.565485 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:42.566152 kubelet[2664]: E1101 00:22:42.566098 2664 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:00518e1e75554aa7b4b7d6589bc5691e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zf67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c49b9c9dc-rzlzn_calico-system(6a14ce9d-f1ba-4792-af7e-32782e663117): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:42.570429 containerd[1578]: time="2025-11-01T00:22:42.569195644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.508 [WARNING][5128] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"945dd47a-80ea-4932-9742-bcde28f179e6", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"b817494a271367bbd528cbc2bab876eb897ef917c51895b8cfc033b5a2b1b3cc", Pod:"coredns-668d6bf9bc-s6zgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44587a2a6ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.509 [INFO][5128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.509 [INFO][5128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" iface="eth0" netns="" Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.509 [INFO][5128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.509 [INFO][5128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.553 [INFO][5135] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" HandleID="k8s-pod-network.8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.553 [INFO][5135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.553 [INFO][5135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.569 [WARNING][5135] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" HandleID="k8s-pod-network.8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.569 [INFO][5135] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" HandleID="k8s-pod-network.8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Workload="ci--4081.3.6--n--f16f13e513-k8s-coredns--668d6bf9bc--s6zgv-eth0" Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.574 [INFO][5135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:42.582648 containerd[1578]: 2025-11-01 00:22:42.578 [INFO][5128] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe" Nov 1 00:22:42.582648 containerd[1578]: time="2025-11-01T00:22:42.581499887Z" level=info msg="TearDown network for sandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\" successfully" Nov 1 00:22:42.595445 containerd[1578]: time="2025-11-01T00:22:42.595358639Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:22:42.596220 containerd[1578]: time="2025-11-01T00:22:42.595661019Z" level=info msg="RemovePodSandbox \"8268ae4dffbd50d76f6d698ab07b5f347539f734d0a75d3ff6f482b35445dffe\" returns successfully" Nov 1 00:22:42.597075 containerd[1578]: time="2025-11-01T00:22:42.596479557Z" level=info msg="StopPodSandbox for \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\"" Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.669 [WARNING][5149] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0", GenerateName:"calico-apiserver-7d65b76bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d65b76bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51", Pod:"calico-apiserver-7d65b76bbf-mht9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65b5de9487f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.670 [INFO][5149] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.670 [INFO][5149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" iface="eth0" netns="" Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.670 [INFO][5149] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.670 [INFO][5149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.706 [INFO][5156] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" HandleID="k8s-pod-network.c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.707 [INFO][5156] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.707 [INFO][5156] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.719 [WARNING][5156] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" HandleID="k8s-pod-network.c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.719 [INFO][5156] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" HandleID="k8s-pod-network.c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.722 [INFO][5156] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:42.727132 containerd[1578]: 2025-11-01 00:22:42.724 [INFO][5149] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:42.729034 containerd[1578]: time="2025-11-01T00:22:42.728514919Z" level=info msg="TearDown network for sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\" successfully" Nov 1 00:22:42.729034 containerd[1578]: time="2025-11-01T00:22:42.728551028Z" level=info msg="StopPodSandbox for \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\" returns successfully" Nov 1 00:22:42.730258 containerd[1578]: time="2025-11-01T00:22:42.729263049Z" level=info msg="RemovePodSandbox for \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\"" Nov 1 00:22:42.730258 containerd[1578]: time="2025-11-01T00:22:42.729295115Z" level=info msg="Forcibly stopping sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\"" Nov 1 00:22:42.831936 systemd[1]: Started sshd@8-165.232.144.31:22-139.178.68.195:52318.service - OpenSSH per-connection server daemon (139.178.68.195:52318). Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.779 [WARNING][5170] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0", GenerateName:"calico-apiserver-7d65b76bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d65b76bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"0fd8d93b8e999c82df37fa1ce3ef0dc4983340ad8166f798c40c8c26dba1bf51", Pod:"calico-apiserver-7d65b76bbf-mht9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65b5de9487f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.780 [INFO][5170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.780 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" iface="eth0" netns="" Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.780 [INFO][5170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.780 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.820 [INFO][5178] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" HandleID="k8s-pod-network.c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.821 [INFO][5178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.821 [INFO][5178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.841 [WARNING][5178] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" HandleID="k8s-pod-network.c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.841 [INFO][5178] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" HandleID="k8s-pod-network.c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--mht9v-eth0" Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.844 [INFO][5178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:42.864924 containerd[1578]: 2025-11-01 00:22:42.861 [INFO][5170] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9" Nov 1 00:22:42.867133 containerd[1578]: time="2025-11-01T00:22:42.865846392Z" level=info msg="TearDown network for sandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\" successfully" Nov 1 00:22:42.875719 containerd[1578]: time="2025-11-01T00:22:42.875660987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:22:42.875965 containerd[1578]: time="2025-11-01T00:22:42.875733958Z" level=info msg="RemovePodSandbox \"c231bbceafc8d471ab1fcc597879ee9c461c52998dfd4279ae60b2c662639ff9\" returns successfully" Nov 1 00:22:42.877729 containerd[1578]: time="2025-11-01T00:22:42.877690660Z" level=info msg="StopPodSandbox for \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\"" Nov 1 00:22:42.883812 containerd[1578]: time="2025-11-01T00:22:42.883581314Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:42.884494 containerd[1578]: time="2025-11-01T00:22:42.884450172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:22:42.884761 containerd[1578]: time="2025-11-01T00:22:42.884672448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:22:42.886179 kubelet[2664]: E1101 00:22:42.886117 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:42.886337 kubelet[2664]: E1101 00:22:42.886193 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:42.889808 kubelet[2664]: E1101 00:22:42.888050 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zf67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c49b9c9dc-rzlzn_calico-system(6a14ce9d-f1ba-4792-af7e-32782e663117): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:42.891029 kubelet[2664]: E1101 00:22:42.890955 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c49b9c9dc-rzlzn" podUID="6a14ce9d-f1ba-4792-af7e-32782e663117" Nov 1 00:22:42.975861 sshd[5185]: Accepted publickey for core from 139.178.68.195 port 52318 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:22:42.979549 sshd[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:43.000737 systemd-logind[1552]: New 
session 9 of user core. Nov 1 00:22:43.005811 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:42.989 [WARNING][5195] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c76f0dc0-2591-4062-8741-1604477875d5", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929", Pod:"csi-node-driver-mglhw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b7fff65878", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:42.989 [INFO][5195] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:42.989 [INFO][5195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" iface="eth0" netns="" Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:42.989 [INFO][5195] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:42.989 [INFO][5195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:43.065 [INFO][5203] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" HandleID="k8s-pod-network.f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:43.065 [INFO][5203] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:43.065 [INFO][5203] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
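[Editor's note] Each PullImage failure in this section has the same shape: containerd's registry resolver receives a 404 from ghcr.io ("trying next host - response was http.StatusNotFound"), the pull surfaces as a gRPC NotFound status, and kubelet records ErrImagePull and retries with backoff. The resolution step can be reproduced outside kubelet with containerd's docker resolver; a minimal sketch follows, assuming the containerd 1.x Go module is on the import path and anonymous access to the public registry (the image reference is taken from the log).

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	// Resolve the same reference the kubelet asked for. A tag that does
	// not exist in the registry fails here, before any layers are
	// fetched, matching the "failed to resolve reference" text above.
	resolver := docker.NewResolver(docker.ResolverOptions{})
	ref := "ghcr.io/flatcar/calico/whisker-backend:v3.30.4"

	name, desc, err := resolver.Resolve(context.Background(), ref)
	if err != nil {
		// Expected for this tag: the registry answers 404, containerd
		// reports "not found", and kubelet turns it into ErrImagePull.
		fmt.Println("resolve failed:", err)
		return
	}
	fmt.Println("resolved", name, "->", desc.Digest)
}
```

Equivalently, running `ctr images pull ghcr.io/flatcar/calico/whisker-backend:v3.30.4` on the node reproduces the same error, which helps distinguish a missing tag from a kubelet-side credential or configuration problem.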
Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:43.075 [WARNING][5203] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" HandleID="k8s-pod-network.f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:43.075 [INFO][5203] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" HandleID="k8s-pod-network.f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:43.080 [INFO][5203] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:43.090574 containerd[1578]: 2025-11-01 00:22:43.086 [INFO][5195] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:43.091302 containerd[1578]: time="2025-11-01T00:22:43.090609794Z" level=info msg="TearDown network for sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\" successfully" Nov 1 00:22:43.091302 containerd[1578]: time="2025-11-01T00:22:43.090666724Z" level=info msg="StopPodSandbox for \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\" returns successfully" Nov 1 00:22:43.093969 containerd[1578]: time="2025-11-01T00:22:43.093654164Z" level=info msg="RemovePodSandbox for \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\"" Nov 1 00:22:43.093969 containerd[1578]: time="2025-11-01T00:22:43.093701402Z" level=info msg="Forcibly stopping sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\"" Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.192 [WARNING][5223] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c76f0dc0-2591-4062-8741-1604477875d5", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"89cda22547fbd23414d99a3e44ac559c841b134a5dc502fcdd71c3b496dac929", Pod:"csi-node-driver-mglhw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b7fff65878", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.193 [INFO][5223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.193 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" iface="eth0" netns="" Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.193 [INFO][5223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.193 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.312 [INFO][5234] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" HandleID="k8s-pod-network.f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.313 [INFO][5234] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.313 [INFO][5234] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.341 [WARNING][5234] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" HandleID="k8s-pod-network.f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.342 [INFO][5234] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" HandleID="k8s-pod-network.f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Workload="ci--4081.3.6--n--f16f13e513-k8s-csi--node--driver--mglhw-eth0" Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.356 [INFO][5234] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:43.373492 containerd[1578]: 2025-11-01 00:22:43.364 [INFO][5223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e" Nov 1 00:22:43.373492 containerd[1578]: time="2025-11-01T00:22:43.369742184Z" level=info msg="TearDown network for sandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\" successfully" Nov 1 00:22:43.379879 containerd[1578]: time="2025-11-01T00:22:43.379823818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:22:43.380380 containerd[1578]: time="2025-11-01T00:22:43.379897509Z" level=info msg="RemovePodSandbox \"f99b674a816c47022593b5453d3e1829c8b004116ec4b3e02916763b77c8302e\" returns successfully" Nov 1 00:22:43.383170 containerd[1578]: time="2025-11-01T00:22:43.381720823Z" level=info msg="StopPodSandbox for \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\"" Nov 1 00:22:43.432311 sshd[5185]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:43.449910 systemd[1]: sshd@8-165.232.144.31:22-139.178.68.195:52318.service: Deactivated successfully. Nov 1 00:22:43.463781 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:22:43.466932 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:22:43.471973 systemd-logind[1552]: Removed session 9. Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.487 [WARNING][5249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"14a0cb3a-c17b-419c-80e4-76ffe3aff4c5", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f", Pod:"goldmane-666569f655-2zlcz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali42609e27a27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.488 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.488 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" iface="eth0" netns="" Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.488 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.488 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.545 [INFO][5260] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" HandleID="k8s-pod-network.926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.545 [INFO][5260] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.545 [INFO][5260] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.555 [WARNING][5260] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" HandleID="k8s-pod-network.926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.555 [INFO][5260] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" HandleID="k8s-pod-network.926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.558 [INFO][5260] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:43.566816 containerd[1578]: 2025-11-01 00:22:43.563 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:43.569745 containerd[1578]: time="2025-11-01T00:22:43.567012081Z" level=info msg="TearDown network for sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\" successfully" Nov 1 00:22:43.569745 containerd[1578]: time="2025-11-01T00:22:43.567113324Z" level=info msg="StopPodSandbox for \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\" returns successfully" Nov 1 00:22:43.569745 containerd[1578]: time="2025-11-01T00:22:43.568806934Z" level=info msg="RemovePodSandbox for \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\"" Nov 1 00:22:43.569745 containerd[1578]: time="2025-11-01T00:22:43.568838700Z" level=info msg="Forcibly stopping sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\"" Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.631 [WARNING][5274] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"14a0cb3a-c17b-419c-80e4-76ffe3aff4c5", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"cd7638e10f7891aa340a1f7913b99e27b2aab819b267ed134d8beebc85dc4a9f", Pod:"goldmane-666569f655-2zlcz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali42609e27a27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.631 [INFO][5274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.631 [INFO][5274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" iface="eth0" netns="" Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.631 [INFO][5274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.631 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.680 [INFO][5281] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" HandleID="k8s-pod-network.926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.680 [INFO][5281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.680 [INFO][5281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.692 [WARNING][5281] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" HandleID="k8s-pod-network.926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.692 [INFO][5281] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" HandleID="k8s-pod-network.926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Workload="ci--4081.3.6--n--f16f13e513-k8s-goldmane--666569f655--2zlcz-eth0" Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.696 [INFO][5281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:43.702366 containerd[1578]: 2025-11-01 00:22:43.698 [INFO][5274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e" Nov 1 00:22:43.702366 containerd[1578]: time="2025-11-01T00:22:43.701521371Z" level=info msg="TearDown network for sandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\" successfully" Nov 1 00:22:43.706691 containerd[1578]: time="2025-11-01T00:22:43.705288875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:22:43.706691 containerd[1578]: time="2025-11-01T00:22:43.705349077Z" level=info msg="RemovePodSandbox \"926ecd67f69ed8a1d0e89f9a466e64cc2823c2f3cccd81c199579f2fd44dc04e\" returns successfully" Nov 1 00:22:43.706691 containerd[1578]: time="2025-11-01T00:22:43.705810749Z" level=info msg="StopPodSandbox for \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\"" Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.763 [WARNING][5297] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0", GenerateName:"calico-apiserver-7d65b76bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"1172650d-8656-4c06-afa1-e156b3ef1286", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d65b76bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943", Pod:"calico-apiserver-7d65b76bbf-shhvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18f1a722935", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.763 [INFO][5297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.763 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" iface="eth0" netns="" Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.763 [INFO][5297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.763 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.805 [INFO][5305] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" HandleID="k8s-pod-network.b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.806 [INFO][5305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.806 [INFO][5305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.814 [WARNING][5305] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" HandleID="k8s-pod-network.b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.814 [INFO][5305] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" HandleID="k8s-pod-network.b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.817 [INFO][5305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:43.821869 containerd[1578]: 2025-11-01 00:22:43.819 [INFO][5297] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:43.824858 containerd[1578]: time="2025-11-01T00:22:43.821922758Z" level=info msg="TearDown network for sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\" successfully" Nov 1 00:22:43.824858 containerd[1578]: time="2025-11-01T00:22:43.821950962Z" level=info msg="StopPodSandbox for \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\" returns successfully" Nov 1 00:22:43.827509 containerd[1578]: time="2025-11-01T00:22:43.825415621Z" level=info msg="RemovePodSandbox for \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\"" Nov 1 00:22:43.827509 containerd[1578]: time="2025-11-01T00:22:43.825454586Z" level=info msg="Forcibly stopping sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\"" Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.885 [WARNING][5320] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0", GenerateName:"calico-apiserver-7d65b76bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"1172650d-8656-4c06-afa1-e156b3ef1286", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d65b76bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f16f13e513", ContainerID:"7a9865996e7ebe93bf148cf9ee17e8aca4299aec1c404a174f13dce9b9d04943", Pod:"calico-apiserver-7d65b76bbf-shhvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18f1a722935", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.885 [INFO][5320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.885 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" iface="eth0" netns="" Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.885 [INFO][5320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.885 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.927 [INFO][5328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" HandleID="k8s-pod-network.b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.929 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.929 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.935 [WARNING][5328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" HandleID="k8s-pod-network.b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.935 [INFO][5328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" HandleID="k8s-pod-network.b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Workload="ci--4081.3.6--n--f16f13e513-k8s-calico--apiserver--7d65b76bbf--shhvk-eth0" Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.937 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:43.943407 containerd[1578]: 2025-11-01 00:22:43.940 [INFO][5320] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9" Nov 1 00:22:43.944115 containerd[1578]: time="2025-11-01T00:22:43.943466390Z" level=info msg="TearDown network for sandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\" successfully" Nov 1 00:22:43.946111 containerd[1578]: time="2025-11-01T00:22:43.946046481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:22:43.946326 containerd[1578]: time="2025-11-01T00:22:43.946148954Z" level=info msg="RemovePodSandbox \"b37f3e33208b9a2722742417714503b1bb29cf5b03551d8ac2b696be6c296fe9\" returns successfully" Nov 1 00:22:44.178345 containerd[1578]: time="2025-11-01T00:22:44.178285312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:44.508903 containerd[1578]: time="2025-11-01T00:22:44.508534137Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:44.510469 containerd[1578]: time="2025-11-01T00:22:44.510310820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:44.510469 containerd[1578]: time="2025-11-01T00:22:44.510390041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:44.510988 kubelet[2664]: E1101 00:22:44.510951 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:44.511931 kubelet[2664]: E1101 00:22:44.511055 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:44.511931 kubelet[2664]: E1101 00:22:44.511709 2664 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spw79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d65b76bbf-shhvk_calico-apiserver(1172650d-8656-4c06-afa1-e156b3ef1286): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:44.512940 kubelet[2664]: E1101 00:22:44.512874 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" podUID="1172650d-8656-4c06-afa1-e156b3ef1286" Nov 1 00:22:46.178409 containerd[1578]: time="2025-11-01T00:22:46.177930054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:46.651106 containerd[1578]: time="2025-11-01T00:22:46.651020002Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:46.653277 containerd[1578]: time="2025-11-01T00:22:46.653186301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:46.654602 containerd[1578]: time="2025-11-01T00:22:46.653365225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:46.654858 kubelet[2664]: E1101 00:22:46.654725 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:46.654858 kubelet[2664]: E1101 00:22:46.654835 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:46.657266 kubelet[2664]: E1101 00:22:46.655116 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8bpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d65b76bbf-mht9v_calico-apiserver(fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:46.657266 kubelet[2664]: E1101 00:22:46.656884 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" podUID="fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b" Nov 1 00:22:46.657601 containerd[1578]: time="2025-11-01T00:22:46.655350737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:22:46.962686 containerd[1578]: time="2025-11-01T00:22:46.962464090Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:46.968438 containerd[1578]: time="2025-11-01T00:22:46.964615783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:22:46.970449 containerd[1578]: time="2025-11-01T00:22:46.970174860Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:22:46.973419 kubelet[2664]: E1101 00:22:46.970846 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:46.973419 kubelet[2664]: E1101 00:22:46.970978 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:46.973419 kubelet[2664]: E1101 00:22:46.971163 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwswh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76d974b5c6-z26qx_calico-system(70d7c9dc-5ae1-4150-b4ab-1e59c014a05a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:46.974119 kubelet[2664]: E1101 00:22:46.973933 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" podUID="70d7c9dc-5ae1-4150-b4ab-1e59c014a05a" Nov 1 00:22:47.199400 containerd[1578]: time="2025-11-01T00:22:47.198972516Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:22:47.517130 containerd[1578]: time="2025-11-01T00:22:47.517061486Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:47.517969 containerd[1578]: time="2025-11-01T00:22:47.517895580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:22:47.518063 containerd[1578]: time="2025-11-01T00:22:47.518017225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:47.520557 kubelet[2664]: E1101 00:22:47.518198 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:47.520557 kubelet[2664]: E1101 00:22:47.518303 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:47.520557 kubelet[2664]: E1101 00:22:47.518485 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2cp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2zlcz_calico-system(14a0cb3a-c17b-419c-80e4-76ffe3aff4c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:47.520557 kubelet[2664]: E1101 00:22:47.520361 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2zlcz" podUID="14a0cb3a-c17b-419c-80e4-76ffe3aff4c5" Nov 1 00:22:48.451747 systemd[1]: Started sshd@9-165.232.144.31:22-139.178.68.195:53090.service - OpenSSH per-connection server daemon (139.178.68.195:53090). Nov 1 00:22:48.520422 sshd[5342]: Accepted publickey for core from 139.178.68.195 port 53090 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:22:48.531075 sshd[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:48.554312 systemd-logind[1552]: New session 10 of user core. Nov 1 00:22:48.563977 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:22:48.804729 sshd[5342]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:48.815879 systemd[1]: Started sshd@10-165.232.144.31:22-139.178.68.195:53104.service - OpenSSH per-connection server daemon (139.178.68.195:53104). Nov 1 00:22:48.818585 systemd[1]: sshd@9-165.232.144.31:22-139.178.68.195:53090.service: Deactivated successfully. Nov 1 00:22:48.828433 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:22:48.842868 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:22:48.847984 systemd-logind[1552]: Removed session 10. Nov 1 00:22:48.891611 sshd[5354]: Accepted publickey for core from 139.178.68.195 port 53104 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:22:48.893722 sshd[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:48.899541 systemd-logind[1552]: New session 11 of user core. Nov 1 00:22:48.904846 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 1 00:22:49.181646 containerd[1578]: time="2025-11-01T00:22:49.181534681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:22:49.227214 sshd[5354]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:49.245067 systemd[1]: Started sshd@11-165.232.144.31:22-139.178.68.195:53118.service - OpenSSH per-connection server daemon (139.178.68.195:53118). Nov 1 00:22:49.254637 systemd[1]: sshd@10-165.232.144.31:22-139.178.68.195:53104.service: Deactivated successfully. Nov 1 00:22:49.273864 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:22:49.283490 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:22:49.289214 systemd-logind[1552]: Removed session 11. Nov 1 00:22:49.362409 sshd[5366]: Accepted publickey for core from 139.178.68.195 port 53118 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:22:49.364538 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:49.371665 systemd-logind[1552]: New session 12 of user core. Nov 1 00:22:49.377694 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:22:49.523162 containerd[1578]: time="2025-11-01T00:22:49.522930968Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:49.525328 containerd[1578]: time="2025-11-01T00:22:49.525253123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:22:49.525550 containerd[1578]: time="2025-11-01T00:22:49.525391738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:22:49.527621 kubelet[2664]: E1101 00:22:49.527546 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:49.527621 kubelet[2664]: E1101 00:22:49.527623 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:49.535007 kubelet[2664]: E1101 00:22:49.532081 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp8fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mglhw_calico-system(c76f0dc0-2591-4062-8741-1604477875d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:49.539028 containerd[1578]: time="2025-11-01T00:22:49.538941154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:22:49.581174 sshd[5366]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:49.588862 systemd[1]: sshd@11-165.232.144.31:22-139.178.68.195:53118.service: Deactivated successfully. Nov 1 00:22:49.606821 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:22:49.613919 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:22:49.618112 systemd-logind[1552]: Removed session 12. 
Nov 1 00:22:49.877917 containerd[1578]: time="2025-11-01T00:22:49.876530732Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:49.878495 containerd[1578]: time="2025-11-01T00:22:49.877787102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:22:49.879299 containerd[1578]: time="2025-11-01T00:22:49.877848715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:22:49.879365 kubelet[2664]: E1101 00:22:49.878954 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:49.880010 kubelet[2664]: E1101 00:22:49.879945 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:49.881296 kubelet[2664]: E1101 00:22:49.880176 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp8fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mglhw_calico-system(c76f0dc0-2591-4062-8741-1604477875d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:49.881562 kubelet[2664]: E1101 00:22:49.881508 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:22:54.591824 systemd[1]: Started sshd@12-165.232.144.31:22-139.178.68.195:46880.service - OpenSSH per-connection server daemon (139.178.68.195:46880). Nov 1 00:22:54.669588 sshd[5391]: Accepted publickey for core from 139.178.68.195 port 46880 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:22:54.676444 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:54.701527 systemd-logind[1552]: New session 13 of user core. 
Nov 1 00:22:54.707500 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:22:54.896130 sshd[5391]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:54.904413 systemd[1]: sshd@12-165.232.144.31:22-139.178.68.195:46880.service: Deactivated successfully. Nov 1 00:22:54.911292 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:22:54.912743 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:22:54.914000 systemd-logind[1552]: Removed session 13. Nov 1 00:22:56.178864 kubelet[2664]: E1101 00:22:56.178797 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:57.178075 kubelet[2664]: E1101 00:22:57.177953 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:58.179552 kubelet[2664]: E1101 00:22:58.179501 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c49b9c9dc-rzlzn" podUID="6a14ce9d-f1ba-4792-af7e-32782e663117" Nov 1 00:22:59.178069 kubelet[2664]: E1101 00:22:59.178006 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:22:59.916928 systemd[1]: Started sshd@13-165.232.144.31:22-139.178.68.195:46882.service - OpenSSH per-connection server daemon (139.178.68.195:46882). Nov 1 00:22:59.992817 sshd[5426]: Accepted publickey for core from 139.178.68.195 port 46882 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:22:59.994326 sshd[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:00.011810 systemd-logind[1552]: New session 14 of user core. Nov 1 00:23:00.022079 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 1 00:23:00.181138 kubelet[2664]: E1101 00:23:00.178992 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" podUID="1172650d-8656-4c06-afa1-e156b3ef1286" Nov 1 00:23:00.271274 sshd[5426]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:00.279896 systemd[1]: sshd@13-165.232.144.31:22-139.178.68.195:46882.service: Deactivated successfully. Nov 1 00:23:00.292426 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:23:00.300821 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:23:00.305599 systemd-logind[1552]: Removed session 14. Nov 1 00:23:01.183820 kubelet[2664]: E1101 00:23:01.181690 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" podUID="fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b" Nov 1 00:23:01.183820 kubelet[2664]: E1101 00:23:01.182257 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" podUID="70d7c9dc-5ae1-4150-b4ab-1e59c014a05a" Nov 1 00:23:02.180527 kubelet[2664]: E1101 00:23:02.180371 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2zlcz" podUID="14a0cb3a-c17b-419c-80e4-76ffe3aff4c5" Nov 1 00:23:03.181808 kubelet[2664]: E1101 00:23:03.180751 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:23:05.284110 systemd[1]: Started sshd@14-165.232.144.31:22-139.178.68.195:52818.service - OpenSSH per-connection server daemon (139.178.68.195:52818). Nov 1 00:23:05.348228 sshd[5442]: Accepted publickey for core from 139.178.68.195 port 52818 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:23:05.351994 sshd[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:05.368920 systemd-logind[1552]: New session 15 of user core. Nov 1 00:23:05.375310 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:23:05.586919 sshd[5442]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:05.600897 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:23:05.601328 systemd[1]: sshd@14-165.232.144.31:22-139.178.68.195:52818.service: Deactivated successfully. Nov 1 00:23:05.618170 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:23:05.627556 systemd-logind[1552]: Removed session 15. Nov 1 00:23:09.182517 containerd[1578]: time="2025-11-01T00:23:09.182076675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:09.509683 containerd[1578]: time="2025-11-01T00:23:09.508732437Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:09.510662 containerd[1578]: time="2025-11-01T00:23:09.509834960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:09.510662 containerd[1578]: time="2025-11-01T00:23:09.509960832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:09.511670 kubelet[2664]: E1101 00:23:09.511593 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:09.513055 kubelet[2664]: E1101 00:23:09.511684 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:09.513055 kubelet[2664]: E1101 00:23:09.511852 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:00518e1e75554aa7b4b7d6589bc5691e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zf67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c49b9c9dc-rzlzn_calico-system(6a14ce9d-f1ba-4792-af7e-32782e663117): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:09.518425 containerd[1578]: time="2025-11-01T00:23:09.517478257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:09.832041 containerd[1578]: time="2025-11-01T00:23:09.831813115Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:09.834461 containerd[1578]: time="2025-11-01T00:23:09.833367738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:09.834461 containerd[1578]: time="2025-11-01T00:23:09.833473472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:09.835087 kubelet[2664]: E1101 00:23:09.835031 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:09.835262 kubelet[2664]: E1101 00:23:09.835243 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:09.835491 kubelet[2664]: E1101 00:23:09.835457 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zf67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c49b9c9dc-rzlzn_calico-system(6a14ce9d-f1ba-4792-af7e-32782e663117): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:09.837452 kubelet[2664]: E1101 00:23:09.837368 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c49b9c9dc-rzlzn" podUID="6a14ce9d-f1ba-4792-af7e-32782e663117" Nov 1 00:23:10.603907 systemd[1]: Started sshd@15-165.232.144.31:22-139.178.68.195:52828.service - OpenSSH per-connection server daemon (139.178.68.195:52828). 
Nov 1 00:23:10.693796 sshd[5462]: Accepted publickey for core from 139.178.68.195 port 52828 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:23:10.697659 sshd[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:10.716860 systemd-logind[1552]: New session 16 of user core. Nov 1 00:23:10.720922 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:23:10.959788 sshd[5462]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:10.970914 systemd[1]: Started sshd@16-165.232.144.31:22-139.178.68.195:52840.service - OpenSSH per-connection server daemon (139.178.68.195:52840). Nov 1 00:23:10.976908 systemd[1]: sshd@15-165.232.144.31:22-139.178.68.195:52828.service: Deactivated successfully. Nov 1 00:23:10.994826 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:23:11.005529 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:23:11.015596 systemd-logind[1552]: Removed session 16. Nov 1 00:23:11.074781 sshd[5473]: Accepted publickey for core from 139.178.68.195 port 52840 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:23:11.078049 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:11.090656 systemd-logind[1552]: New session 17 of user core. Nov 1 00:23:11.101639 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:23:11.631638 sshd[5473]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:11.652054 systemd[1]: Started sshd@17-165.232.144.31:22-139.178.68.195:52854.service - OpenSSH per-connection server daemon (139.178.68.195:52854). Nov 1 00:23:11.652965 systemd[1]: sshd@16-165.232.144.31:22-139.178.68.195:52840.service: Deactivated successfully. Nov 1 00:23:11.677693 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:23:11.688493 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:23:11.693488 systemd-logind[1552]: Removed session 17. Nov 1 00:23:11.762438 sshd[5485]: Accepted publickey for core from 139.178.68.195 port 52854 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:23:11.763255 sshd[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:11.784777 systemd-logind[1552]: New session 18 of user core. Nov 1 00:23:11.793718 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 1 00:23:12.180419 containerd[1578]: time="2025-11-01T00:23:12.179491838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:12.496752 containerd[1578]: time="2025-11-01T00:23:12.496562467Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:12.500162 containerd[1578]: time="2025-11-01T00:23:12.500047462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:12.500349 containerd[1578]: time="2025-11-01T00:23:12.500100946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:12.502532 kubelet[2664]: E1101 00:23:12.500462 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:12.502532 kubelet[2664]: E1101 00:23:12.500528 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:12.502532 kubelet[2664]: E1101 00:23:12.500682 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spw79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d65b76bbf-shhvk_calico-apiserver(1172650d-8656-4c06-afa1-e156b3ef1286): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:12.503298 kubelet[2664]: E1101 00:23:12.502530 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" podUID="1172650d-8656-4c06-afa1-e156b3ef1286" Nov 1 00:23:12.805620 sshd[5485]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:12.831130 systemd[1]: Started sshd@18-165.232.144.31:22-139.178.68.195:52856.service - OpenSSH per-connection server daemon (139.178.68.195:52856). Nov 1 00:23:12.831982 systemd[1]: sshd@17-165.232.144.31:22-139.178.68.195:52854.service: Deactivated successfully. Nov 1 00:23:12.852730 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:23:12.856921 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:23:12.864694 systemd-logind[1552]: Removed session 18. Nov 1 00:23:12.974790 sshd[5503]: Accepted publickey for core from 139.178.68.195 port 52856 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:23:12.977332 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:12.992648 systemd-logind[1552]: New session 19 of user core. Nov 1 00:23:12.998663 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 1 00:23:13.200931 containerd[1578]: time="2025-11-01T00:23:13.200342171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:13.518021 containerd[1578]: time="2025-11-01T00:23:13.516331317Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:13.518637 containerd[1578]: time="2025-11-01T00:23:13.518566404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:13.518754 containerd[1578]: time="2025-11-01T00:23:13.518714968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:13.520359 kubelet[2664]: E1101 00:23:13.520286 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:13.521089 kubelet[2664]: E1101 00:23:13.520389 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:13.523138 kubelet[2664]: E1101 00:23:13.522612 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2cp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2zlcz_calico-system(14a0cb3a-c17b-419c-80e4-76ffe3aff4c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:13.523925 kubelet[2664]: E1101 00:23:13.523874 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2zlcz" podUID="14a0cb3a-c17b-419c-80e4-76ffe3aff4c5" Nov 1 00:23:13.701172 sshd[5503]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:13.725897 systemd[1]: Started sshd@19-165.232.144.31:22-139.178.68.195:36984.service - OpenSSH per-connection server daemon (139.178.68.195:36984). Nov 1 00:23:13.727331 systemd[1]: sshd@18-165.232.144.31:22-139.178.68.195:52856.service: Deactivated successfully. Nov 1 00:23:13.737177 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:23:13.739196 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:23:13.744556 systemd-logind[1552]: Removed session 19. Nov 1 00:23:13.843548 sshd[5519]: Accepted publickey for core from 139.178.68.195 port 36984 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:23:13.849769 sshd[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:13.860287 systemd-logind[1552]: New session 20 of user core. Nov 1 00:23:13.864541 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:23:14.104657 sshd[5519]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:14.113110 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:23:14.117792 systemd[1]: sshd@19-165.232.144.31:22-139.178.68.195:36984.service: Deactivated successfully. Nov 1 00:23:14.126706 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:23:14.129522 systemd-logind[1552]: Removed session 20. Nov 1 00:23:14.312796 systemd-journald[1137]: Under memory pressure, flushing caches. 
Nov 1 00:23:14.309366 systemd-resolved[1480]: Under memory pressure, flushing caches. Nov 1 00:23:14.309413 systemd-resolved[1480]: Flushed all caches. Nov 1 00:23:15.187412 containerd[1578]: time="2025-11-01T00:23:15.186369252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:15.505846 containerd[1578]: time="2025-11-01T00:23:15.505335132Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:15.506863 containerd[1578]: time="2025-11-01T00:23:15.506807868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:15.507219 containerd[1578]: time="2025-11-01T00:23:15.506829108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:15.507462 kubelet[2664]: E1101 00:23:15.507404 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:15.508786 kubelet[2664]: E1101 00:23:15.507484 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:15.509066 containerd[1578]: time="2025-11-01T00:23:15.508484493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:15.512195 kubelet[2664]: E1101 00:23:15.512111 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp8fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mglhw_calico-system(c76f0dc0-2591-4062-8741-1604477875d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:15.806053 containerd[1578]: time="2025-11-01T00:23:15.805671657Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:15.807295 containerd[1578]: time="2025-11-01T00:23:15.807164103Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:15.807422 containerd[1578]: time="2025-11-01T00:23:15.807255944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:15.807728 kubelet[2664]: E1101 00:23:15.807645 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:15.808132 kubelet[2664]: E1101 00:23:15.807728 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:15.808132 kubelet[2664]: E1101 00:23:15.807961 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8bpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d65b76bbf-mht9v_calico-apiserver(fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:15.809110 containerd[1578]: time="2025-11-01T00:23:15.808849775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:15.810042 kubelet[2664]: E1101 00:23:15.809948 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" podUID="fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b" Nov 1 00:23:16.118652 containerd[1578]: time="2025-11-01T00:23:16.118437345Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:16.121572 containerd[1578]: 
time="2025-11-01T00:23:16.119521456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:16.121572 containerd[1578]: time="2025-11-01T00:23:16.119638513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:16.121830 kubelet[2664]: E1101 00:23:16.120199 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:16.121830 kubelet[2664]: E1101 00:23:16.120276 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:16.121830 kubelet[2664]: E1101 00:23:16.120499 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp8fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-mglhw_calico-system(c76f0dc0-2591-4062-8741-1604477875d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:16.122315 kubelet[2664]: E1101 00:23:16.122234 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:23:16.176288 kubelet[2664]: E1101 00:23:16.176228 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:23:16.180406 containerd[1578]: time="2025-11-01T00:23:16.178305741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:16.519043 containerd[1578]: time="2025-11-01T00:23:16.518929660Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:16.520076 containerd[1578]: time="2025-11-01T00:23:16.520013234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:16.520198 containerd[1578]: time="2025-11-01T00:23:16.520151457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:16.520537 kubelet[2664]: E1101 00:23:16.520438 2664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:16.520537 kubelet[2664]: E1101 00:23:16.520499 2664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:16.521955 kubelet[2664]: E1101 00:23:16.520675 2664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwswh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76d974b5c6-z26qx_calico-system(70d7c9dc-5ae1-4150-b4ab-1e59c014a05a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:16.522363 kubelet[2664]: E1101 00:23:16.522272 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" podUID="70d7c9dc-5ae1-4150-b4ab-1e59c014a05a" Nov 1 00:23:19.120851 systemd[1]: Started sshd@20-165.232.144.31:22-139.178.68.195:36990.service - OpenSSH 
per-connection server daemon (139.178.68.195:36990). Nov 1 00:23:19.224493 sshd[5541]: Accepted publickey for core from 139.178.68.195 port 36990 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:23:19.224883 sshd[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:19.242126 systemd-logind[1552]: New session 21 of user core. Nov 1 00:23:19.253596 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:23:19.552959 sshd[5541]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:19.566341 systemd-logind[1552]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:23:19.572968 systemd[1]: sshd@20-165.232.144.31:22-139.178.68.195:36990.service: Deactivated successfully. Nov 1 00:23:19.597680 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:23:19.603700 systemd-logind[1552]: Removed session 21. Nov 1 00:23:21.185451 kubelet[2664]: E1101 00:23:21.182669 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c49b9c9dc-rzlzn" podUID="6a14ce9d-f1ba-4792-af7e-32782e663117" Nov 1 00:23:24.175962 kubelet[2664]: E1101 00:23:24.175877 2664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:23:24.580633 systemd[1]: Started sshd@21-165.232.144.31:22-139.178.68.195:50964.service - OpenSSH per-connection server daemon (139.178.68.195:50964). Nov 1 00:23:24.706451 sshd[5557]: Accepted publickey for core from 139.178.68.195 port 50964 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:23:24.710114 sshd[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:24.717811 systemd-logind[1552]: New session 22 of user core. Nov 1 00:23:24.727442 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:23:25.131016 sshd[5557]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:25.136835 systemd[1]: sshd@21-165.232.144.31:22-139.178.68.195:50964.service: Deactivated successfully. Nov 1 00:23:25.145771 systemd-logind[1552]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:23:25.149290 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:23:25.152830 systemd-logind[1552]: Removed session 22. 
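Twice in this stretch (00:23:16 and 00:23:24) kubelet logs "Nameserver limits exceeded". glibc's stub resolver honours at most three nameserver lines in resolv.conf, so kubelet truncates the node's list and logs only the applied line; note that the applied line above even carries a duplicate (67.207.67.3 appears twice). A minimal sketch of that check, assuming the conventional /etc/resolv.conf location, with applied_nameservers as our own illustrative helper:

```python
# Minimal sketch of the check behind kubelet's "Nameserver limits
# exceeded" warning: glibc's resolver uses at most three nameserver
# lines (MAXNS), so anything beyond that is silently ignored and only
# the truncated, applied list matters.
MAX_NAMESERVERS = 3  # glibc MAXNS

def applied_nameservers(path: str = "/etc/resolv.conf") -> list[str]:
    servers = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    if len(servers) > MAX_NAMESERVERS:
        print(f"limit exceeded, applying only: {servers[:MAX_NAMESERVERS]}")
    return servers[:MAX_NAMESERVERS]
```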
Nov 1 00:23:26.176563 kubelet[2664]: E1101 00:23:26.176173 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-shhvk" podUID="1172650d-8656-4c06-afa1-e156b3ef1286" Nov 1 00:23:27.177650 kubelet[2664]: E1101 00:23:27.177523 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2zlcz" podUID="14a0cb3a-c17b-419c-80e4-76ffe3aff4c5" Nov 1 00:23:28.176657 kubelet[2664]: E1101 00:23:28.176184 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76d974b5c6-z26qx" podUID="70d7c9dc-5ae1-4150-b4ab-1e59c014a05a" Nov 1 00:23:30.144927 systemd[1]: Started sshd@22-165.232.144.31:22-139.178.68.195:50978.service - OpenSSH per-connection server daemon (139.178.68.195:50978). Nov 1 00:23:30.244409 sshd[5593]: Accepted publickey for core from 139.178.68.195 port 50978 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:23:30.250455 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:30.260666 systemd-logind[1552]: New session 23 of user core. Nov 1 00:23:30.267145 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 00:23:30.606505 sshd[5593]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:30.613613 systemd[1]: sshd@22-165.232.144.31:22-139.178.68.195:50978.service: Deactivated successfully. Nov 1 00:23:30.617590 systemd-logind[1552]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:23:30.618359 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:23:30.621040 systemd-logind[1552]: Removed session 23. 
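From 00:23:21 onward the errors shift from ErrImagePull to ImagePullBackOff: kubelet is no longer attempting each pull on every pod sync but waiting out a capped exponential backoff, which by default starts at 10s and doubles up to a 300s ceiling in the runtime manager. A minimal sketch of that delay schedule, with pull_backoff_delays as a hypothetical helper mirroring those defaults:

```python
# Sketch of the capped exponential backoff behind the ImagePullBackOff
# entries above; the 10s/300s values mirror kubelet's defaults, but the
# helper itself is illustrative, not kubelet code.
INITIAL_DELAY_S = 10
MAX_DELAY_S = 300

def pull_backoff_delays(failures: int) -> list[int]:
    # Delay after the (n+1)-th consecutive failure: min(10 * 2**n, 300).
    return [min(INITIAL_DELAY_S * 2 ** n, MAX_DELAY_S) for n in range(failures)]

print(pull_backoff_delays(7))  # [10, 20, 40, 80, 160, 300, 300]
```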
Nov 1 00:23:31.195231 kubelet[2664]: E1101 00:23:31.194940 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d65b76bbf-mht9v" podUID="fb1c3b1f-8037-4dd2-9e98-0b3c9fc3294b" Nov 1 00:23:31.200035 kubelet[2664]: E1101 00:23:31.199929 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mglhw" podUID="c76f0dc0-2591-4062-8741-1604477875d5" Nov 1 00:23:34.178447 kubelet[2664]: E1101 00:23:34.178357 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c49b9c9dc-rzlzn" podUID="6a14ce9d-f1ba-4792-af7e-32782e663117"
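Every Calico image in this section fails the same way, so the repeated pod_workers entries compress well into a per-pod summary. A small parser of assumed shape that reduces the "Error syncing pod" lines above to one line per pod when fed journal output on stdin; summarize and both regexes are ours, written against the log shapes visible in this section:

```python
# Illustrative reducer for the "Error syncing pod" kubelet entries
# above, e.g. `journalctl -u kubelet | python3 summarize_pulls.py`.
# The regexes only assume the escaped-quote log shapes seen here.
import re
import sys
from collections import defaultdict

POD_RE = re.compile(r'pod="(?P<pod>[^"]+)"')
IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)\\+"')

def summarize(lines):
    failures = defaultdict(set)
    for line in lines:
        if "Error syncing pod" not in line:
            continue
        pod = POD_RE.search(line)
        if not pod:
            continue
        for image in IMAGE_RE.findall(line):
            failures[pod.group("pod")].add(image)
    return failures

for pod, images in summarize(sys.stdin).items():
    print(pod, "->", ", ".join(sorted(images)))
```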