Nov 8 00:19:51.001400 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:19:51.001425 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:19:51.001438 kernel: BIOS-provided physical RAM map:
Nov 8 00:19:51.001445 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 8 00:19:51.001451 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 8 00:19:51.001458 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:19:51.001466 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 8 00:19:51.001472 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 8 00:19:51.001479 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:19:51.001488 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:19:51.001495 kernel: NX (Execute Disable) protection: active
Nov 8 00:19:51.001502 kernel: APIC: Static calls initialized
Nov 8 00:19:51.001512 kernel: SMBIOS 2.8 present.
Nov 8 00:19:51.001519 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 8 00:19:51.001528 kernel: Hypervisor detected: KVM
Nov 8 00:19:51.001538 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:19:51.001549 kernel: kvm-clock: using sched offset of 3358115591 cycles
Nov 8 00:19:51.001557 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:19:51.001565 kernel: tsc: Detected 2494.138 MHz processor
Nov 8 00:19:51.001573 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:19:51.001583 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:19:51.001595 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 8 00:19:51.001607 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:19:51.001619 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:19:51.001635 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:19:51.001647 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 8 00:19:51.001656 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:51.001664 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:51.001672 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:51.001679 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 8 00:19:51.001687 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:51.003724 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:51.003748 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:51.003768 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:51.003782 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 8 00:19:51.003793 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 8 00:19:51.003803 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 8 00:19:51.003816 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 8 00:19:51.003829 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 8 00:19:51.003837 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 8 00:19:51.003850 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 8 00:19:51.003861 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:19:51.003869 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 00:19:51.003879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 8 00:19:51.003893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 8 00:19:51.003914 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Nov 8 00:19:51.003929 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Nov 8 00:19:51.003945 kernel: Zone ranges:
Nov 8 00:19:51.003953 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:19:51.003961 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 8 00:19:51.003970 kernel: Normal empty
Nov 8 00:19:51.003978 kernel: Movable zone start for each node
Nov 8 00:19:51.003986 kernel: Early memory node ranges
Nov 8 00:19:51.003995 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:19:51.004003 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 8 00:19:51.004011 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 8 00:19:51.004023 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:19:51.004031 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:19:51.004042 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 8 00:19:51.004051 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:19:51.004059 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:19:51.004067 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:19:51.004076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:19:51.004084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:19:51.004092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:19:51.004103 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:19:51.004112 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:19:51.004120 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:19:51.004128 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:19:51.004136 kernel: TSC deadline timer available
Nov 8 00:19:51.004144 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:19:51.004152 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:19:51.004160 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 8 00:19:51.004172 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:19:51.004181 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:19:51.004193 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:19:51.004201 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:19:51.004209 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:19:51.004217 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:19:51.004226 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 8 00:19:51.004235 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:19:51.004244 kernel: random: crng init done
Nov 8 00:19:51.004252 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:19:51.004263 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:19:51.004271 kernel: Fallback order for Node 0: 0
Nov 8 00:19:51.004280 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Nov 8 00:19:51.004288 kernel: Policy zone: DMA32
Nov 8 00:19:51.004296 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:19:51.004305 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 125148K reserved, 0K cma-reserved)
Nov 8 00:19:51.004318 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:19:51.004331 kernel: Kernel/User page tables isolation: enabled
Nov 8 00:19:51.004343 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:19:51.004357 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:19:51.004371 kernel: Dynamic Preempt: voluntary
Nov 8 00:19:51.004385 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:19:51.004394 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:19:51.004402 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:19:51.004414 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:19:51.004428 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:19:51.004441 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:19:51.004450 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:19:51.004472 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:19:51.004485 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:19:51.004497 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:19:51.004510 kernel: Console: colour VGA+ 80x25
Nov 8 00:19:51.004527 kernel: printk: console [tty0] enabled
Nov 8 00:19:51.004539 kernel: printk: console [ttyS0] enabled
Nov 8 00:19:51.004548 kernel: ACPI: Core revision 20230628
Nov 8 00:19:51.004556 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:19:51.004565 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:19:51.004578 kernel: x2apic enabled
Nov 8 00:19:51.004588 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:19:51.004603 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:19:51.004615 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Nov 8 00:19:51.004628 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Nov 8 00:19:51.004639 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 8 00:19:51.004648 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 8 00:19:51.004656 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:19:51.004676 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:19:51.004685 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:19:51.004705 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 8 00:19:51.005202 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:19:51.005479 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:19:51.005489 kernel: MDS: Mitigation: Clear CPU buffers
Nov 8 00:19:51.005498 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:19:51.005507 kernel: active return thunk: its_return_thunk
Nov 8 00:19:51.005522 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:19:51.005535 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:19:51.005544 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:19:51.005553 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:19:51.005562 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:19:51.005571 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 8 00:19:51.005581 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:19:51.005595 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:19:51.005609 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:19:51.005625 kernel: landlock: Up and running.
Nov 8 00:19:51.005638 kernel: SELinux: Initializing.
Nov 8 00:19:51.005651 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:19:51.005660 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:19:51.005669 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 8 00:19:51.005678 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:19:51.005687 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:19:51.005850 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:19:51.005860 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 8 00:19:51.005873 kernel: signal: max sigframe size: 1776
Nov 8 00:19:51.005882 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:19:51.005892 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:19:51.005900 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:19:51.005909 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:19:51.005918 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:19:51.005927 kernel: .... node #0, CPUs: #1
Nov 8 00:19:51.005936 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:19:51.005949 kernel: smpboot: Max logical packages: 1
Nov 8 00:19:51.005962 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Nov 8 00:19:51.005977 kernel: devtmpfs: initialized
Nov 8 00:19:51.005990 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:19:51.006004 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:19:51.006018 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:19:51.006032 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:19:51.006041 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:19:51.006050 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:19:51.006059 kernel: audit: type=2000 audit(1762561190.329:1): state=initialized audit_enabled=0 res=1
Nov 8 00:19:51.006072 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:19:51.006080 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:19:51.006089 kernel: cpuidle: using governor menu
Nov 8 00:19:51.006098 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:19:51.006107 kernel: dca service started, version 1.12.1
Nov 8 00:19:51.006116 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:19:51.006125 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:19:51.006134 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:19:51.006143 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:19:51.006154 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:19:51.006163 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:19:51.006172 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:19:51.006181 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:19:51.006190 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:19:51.006199 kernel: ACPI: Interpreter enabled
Nov 8 00:19:51.006208 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:19:51.006217 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:19:51.006226 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:19:51.006237 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:19:51.006246 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 8 00:19:51.006255 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:19:51.010770 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:19:51.010907 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 8 00:19:51.011007 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 8 00:19:51.011020 kernel: acpiphp: Slot [3] registered
Nov 8 00:19:51.011035 kernel: acpiphp: Slot [4] registered
Nov 8 00:19:51.011044 kernel: acpiphp: Slot [5] registered
Nov 8 00:19:51.011053 kernel: acpiphp: Slot [6] registered
Nov 8 00:19:51.011062 kernel: acpiphp: Slot [7] registered
Nov 8 00:19:51.011070 kernel: acpiphp: Slot [8] registered
Nov 8 00:19:51.011079 kernel: acpiphp: Slot [9] registered
Nov 8 00:19:51.011089 kernel: acpiphp: Slot [10] registered
Nov 8 00:19:51.011104 kernel: acpiphp: Slot [11] registered
Nov 8 00:19:51.011118 kernel: acpiphp: Slot [12] registered
Nov 8 00:19:51.011131 kernel: acpiphp: Slot [13] registered
Nov 8 00:19:51.011151 kernel: acpiphp: Slot [14] registered
Nov 8 00:19:51.011164 kernel: acpiphp: Slot [15] registered
Nov 8 00:19:51.011176 kernel: acpiphp: Slot [16] registered
Nov 8 00:19:51.011191 kernel: acpiphp: Slot [17] registered
Nov 8 00:19:51.011204 kernel: acpiphp: Slot [18] registered
Nov 8 00:19:51.011220 kernel: acpiphp: Slot [19] registered
Nov 8 00:19:51.011235 kernel: acpiphp: Slot [20] registered
Nov 8 00:19:51.011245 kernel: acpiphp: Slot [21] registered
Nov 8 00:19:51.011254 kernel: acpiphp: Slot [22] registered
Nov 8 00:19:51.011266 kernel: acpiphp: Slot [23] registered
Nov 8 00:19:51.011275 kernel: acpiphp: Slot [24] registered
Nov 8 00:19:51.011283 kernel: acpiphp: Slot [25] registered
Nov 8 00:19:51.011292 kernel: acpiphp: Slot [26] registered
Nov 8 00:19:51.011301 kernel: acpiphp: Slot [27] registered
Nov 8 00:19:51.011312 kernel: acpiphp: Slot [28] registered
Nov 8 00:19:51.011326 kernel: acpiphp: Slot [29] registered
Nov 8 00:19:51.011342 kernel: acpiphp: Slot [30] registered
Nov 8 00:19:51.011351 kernel: acpiphp: Slot [31] registered
Nov 8 00:19:51.011359 kernel: PCI host bridge to bus 0000:00
Nov 8 00:19:51.011532 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:19:51.011653 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:19:51.011769 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:19:51.011856 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 8 00:19:51.011964 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 8 00:19:51.012065 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:19:51.012239 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 8 00:19:51.012367 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 8 00:19:51.012504 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 8 00:19:51.012605 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 8 00:19:51.016777 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 8 00:19:51.016933 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 8 00:19:51.017038 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 8 00:19:51.017146 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 8 00:19:51.017280 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 8 00:19:51.017385 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 8 00:19:51.017509 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 8 00:19:51.017616 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 8 00:19:51.017878 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 8 00:19:51.018007 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 8 00:19:51.018108 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 8 00:19:51.018241 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 8 00:19:51.018347 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 8 00:19:51.018483 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 8 00:19:51.018585 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:19:51.021605 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:19:51.021788 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 8 00:19:51.021897 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 8 00:19:51.021995 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 8 00:19:51.022126 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:19:51.022241 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 8 00:19:51.022358 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 8 00:19:51.022472 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 8 00:19:51.022599 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 8 00:19:51.022708 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 8 00:19:51.022808 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 8 00:19:51.022906 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 8 00:19:51.023029 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:19:51.023127 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 8 00:19:51.023229 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 8 00:19:51.023323 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 8 00:19:51.023436 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:19:51.023532 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 8 00:19:51.023639 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 8 00:19:51.026728 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 8 00:19:51.026985 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 8 00:19:51.027155 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 8 00:19:51.027304 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 8 00:19:51.027324 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:19:51.027336 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:19:51.027345 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:19:51.027354 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:19:51.027363 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 8 00:19:51.027372 kernel: iommu: Default domain type: Translated
Nov 8 00:19:51.027418 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:19:51.027433 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:19:51.027448 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:19:51.027463 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 8 00:19:51.027477 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 8 00:19:51.027649 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 8 00:19:51.028930 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 8 00:19:51.029067 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:19:51.029098 kernel: vgaarb: loaded
Nov 8 00:19:51.029115 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:19:51.029129 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:19:51.029138 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:19:51.029148 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:19:51.029157 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:19:51.029172 kernel: pnp: PnP ACPI init
Nov 8 00:19:51.029184 kernel: pnp: PnP ACPI: found 4 devices
Nov 8 00:19:51.029194 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:19:51.029207 kernel: NET: Registered PF_INET protocol family
Nov 8 00:19:51.029220 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:19:51.029233 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 8 00:19:51.029242 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:19:51.029251 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:19:51.029262 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 8 00:19:51.029276 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 8 00:19:51.029290 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:19:51.029305 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:19:51.029319 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:19:51.029328 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:19:51.029456 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:19:51.029561 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:19:51.029655 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:19:51.030821 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 8 00:19:51.030931 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 8 00:19:51.031092 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 8 00:19:51.031237 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 8 00:19:51.031253 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 8 00:19:51.031386 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 32599 usecs
Nov 8 00:19:51.031433 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:19:51.031481 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 00:19:51.031491 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Nov 8 00:19:51.031500 kernel: Initialise system trusted keyrings
Nov 8 00:19:51.031510 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 8 00:19:51.031519 kernel: Key type asymmetric registered
Nov 8 00:19:51.031532 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:19:51.031542 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:19:51.031551 kernel: io scheduler mq-deadline registered
Nov 8 00:19:51.031560 kernel: io scheduler kyber registered
Nov 8 00:19:51.031569 kernel: io scheduler bfq registered
Nov 8 00:19:51.031578 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:19:51.031587 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 8 00:19:51.031596 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 8 00:19:51.031605 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 8 00:19:51.031617 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:19:51.031626 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:19:51.031635 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:19:51.031644 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:19:51.031653 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:19:51.031662 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:19:51.033873 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 8 00:19:51.033978 kernel: rtc_cmos 00:03: registered as rtc0
Nov 8 00:19:51.034082 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:19:50 UTC (1762561190)
Nov 8 00:19:51.034173 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 8 00:19:51.034184 kernel: intel_pstate: CPU model not supported
Nov 8 00:19:51.034194 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:19:51.034203 kernel: Segment Routing with IPv6
Nov 8 00:19:51.034212 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:19:51.034221 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:19:51.034230 kernel: Key type dns_resolver registered
Nov 8 00:19:51.034239 kernel: IPI shorthand broadcast: enabled
Nov 8 00:19:51.034251 kernel: sched_clock: Marking stable (919004173, 145145027)->(1177477865, -113328665)
Nov 8 00:19:51.034259 kernel: registered taskstats version 1
Nov 8 00:19:51.034268 kernel: Loading compiled-in X.509 certificates
Nov 8 00:19:51.034277 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:19:51.034286 kernel: Key type .fscrypt registered
Nov 8 00:19:51.034294 kernel: Key type fscrypt-provisioning registered
Nov 8 00:19:51.034303 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:19:51.034312 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:19:51.034321 kernel: ima: No architecture policies found
Nov 8 00:19:51.034332 kernel: clk: Disabling unused clocks
Nov 8 00:19:51.034341 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:19:51.034350 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:19:51.034359 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:19:51.034385 kernel: Run /init as init process
Nov 8 00:19:51.034398 kernel: with arguments:
Nov 8 00:19:51.034407 kernel: /init
Nov 8 00:19:51.034416 kernel: with environment:
Nov 8 00:19:51.034425 kernel: HOME=/
Nov 8 00:19:51.034437 kernel: TERM=linux
Nov 8 00:19:51.034449 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:19:51.034460 systemd[1]: Detected virtualization kvm.
Nov 8 00:19:51.034470 systemd[1]: Detected architecture x86-64.
Nov 8 00:19:51.034480 systemd[1]: Running in initrd.
Nov 8 00:19:51.034489 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:19:51.034498 systemd[1]: Hostname set to .
Nov 8 00:19:51.034511 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:19:51.034520 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:19:51.034530 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:19:51.034540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:19:51.034551 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:19:51.034561 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:19:51.034570 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:19:51.034580 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:19:51.034594 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:19:51.034604 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:19:51.034614 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:19:51.034625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:19:51.034634 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:19:51.034644 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:19:51.034654 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:19:51.034666 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:19:51.034676 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:19:51.034686 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:19:51.034707 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:19:51.034740 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:19:51.034758 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:19:51.034772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:19:51.034785 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:19:51.034800 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:19:51.034814 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:19:51.034829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:19:51.034846 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:19:51.034864 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:19:51.034873 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:19:51.034886 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:19:51.034896 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:19:51.034906 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:19:51.034916 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:19:51.034957 systemd-journald[184]: Collecting audit messages is disabled.
Nov 8 00:19:51.034984 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:19:51.035001 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:19:51.035018 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:19:51.035039 systemd-journald[184]: Journal started
Nov 8 00:19:51.035072 systemd-journald[184]: Runtime Journal (/run/log/journal/bd9c6942fc22442d980d66ded879e368) is 4.9M, max 39.3M, 34.4M free.
Nov 8 00:19:51.035862 systemd-modules-load[186]: Inserted module 'overlay'
Nov 8 00:19:51.091304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:19:51.091333 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:19:51.091348 kernel: Bridge firewalling registered
Nov 8 00:19:51.091375 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:19:51.067598 systemd-modules-load[186]: Inserted module 'br_netfilter'
Nov 8 00:19:51.096732 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:19:51.098134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:19:51.102778 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:19:51.105905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:19:51.108085 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:19:51.123113 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:19:51.134634 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:19:51.140405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:19:51.149198 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:19:51.154256 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:19:51.160215 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:19:51.183176 systemd-resolved[215]: Positive Trust Anchors:
Nov 8 00:19:51.183191 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:19:51.183228 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:19:51.188629 systemd-resolved[215]: Defaulting to hostname 'linux'.
Nov 8 00:19:51.193061 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:19:51.195442 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:19:51.201875 dracut-cmdline[221]: dracut-dracut-053
Nov 8 00:19:51.205472 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:19:51.325738 kernel: SCSI subsystem initialized
Nov 8 00:19:51.339729 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:19:51.355786 kernel: iscsi: registered transport (tcp)
Nov 8 00:19:51.385997 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:19:51.386126 kernel: QLogic iSCSI HBA Driver
Nov 8 00:19:51.447072 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:19:51.453991 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:19:51.494018 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:19:51.494146 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:19:51.494168 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:19:51.545800 kernel: raid6: avx2x4 gen() 24846 MB/s
Nov 8 00:19:51.561777 kernel: raid6: avx2x2 gen() 22858 MB/s
Nov 8 00:19:51.578948 kernel: raid6: avx2x1 gen() 22502 MB/s
Nov 8 00:19:51.579076 kernel: raid6: using algorithm avx2x4 gen() 24846 MB/s
Nov 8 00:19:51.597752 kernel: raid6: .... xor() 5741 MB/s, rmw enabled
Nov 8 00:19:51.597876 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:19:51.620742 kernel: xor: automatically using best checksumming function avx
Nov 8 00:19:51.857758 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:19:51.876820 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:19:51.893056 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:19:51.910080 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Nov 8 00:19:51.917557 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:19:51.927009 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:19:51.944935 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Nov 8 00:19:51.984098 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:19:51.995054 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:19:52.055607 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:19:52.066221 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:19:52.104279 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:19:52.108246 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:19:52.109361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:19:52.111296 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:19:52.119065 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:19:52.146020 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:19:52.182768 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 8 00:19:52.198780 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 8 00:19:52.209269 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:19:52.209982 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:19:52.211111 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:19:52.219858 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:19:52.216867 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:19:52.237831 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:19:52.237866 kernel: GPT:9289727 != 125829119
Nov 8 00:19:52.237887 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:19:52.237907 kernel: GPT:9289727 != 125829119
Nov 8 00:19:52.237927 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:19:52.237947 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:19:52.237968 kernel: libata version 3.00 loaded.
Nov 8 00:19:52.217972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:19:52.218606 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:19:52.226627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:19:52.255076 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 8 00:19:52.255360 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 8 00:19:52.255541 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 8 00:19:52.257897 kernel: scsi host1: ata_piix
Nov 8 00:19:52.258163 kernel: scsi host2: ata_piix
Nov 8 00:19:52.259946 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 8 00:19:52.265943 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 8 00:19:52.266026 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:19:52.266048 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:19:52.278723 kernel: ACPI: bus type USB registered
Nov 8 00:19:52.294750 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (460)
Nov 8 00:19:52.300724 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458)
Nov 8 00:19:52.306735 kernel: usbcore: registered new interface driver usbfs
Nov 8 00:19:52.306809 kernel: usbcore: registered new interface driver hub
Nov 8 00:19:52.306833 kernel: usbcore: registered new device driver usb
Nov 8 00:19:52.332661 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 8 00:19:52.385289 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 8 00:19:52.386436 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:19:52.400623 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:19:52.405071 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 8 00:19:52.405740 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 8 00:19:52.417093 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:19:52.423679 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:19:52.433256 disk-uuid[532]: Primary Header is updated.
disk-uuid[532]: Secondary Entries is updated.
disk-uuid[532]: Secondary Header is updated.
Nov 8 00:19:52.444765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:19:52.462343 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:19:52.471759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:19:52.472063 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:19:52.490736 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 8 00:19:52.494894 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 8 00:19:52.495173 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 8 00:19:52.496871 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 8 00:19:52.500867 kernel: hub 1-0:1.0: USB hub found
Nov 8 00:19:52.501150 kernel: hub 1-0:1.0: 2 ports detected
Nov 8 00:19:53.469917 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:19:53.470440 disk-uuid[533]: The operation has completed successfully.
Nov 8 00:19:53.521426 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:19:53.521542 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:19:53.536989 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:19:53.542208 sh[564]: Success
Nov 8 00:19:53.561816 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 8 00:19:53.634226 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:19:53.644900 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:19:53.648115 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:19:53.680763 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:19:53.680881 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:19:53.680904 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:19:53.680947 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:19:53.681121 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:19:53.693245 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:19:53.694973 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:19:53.701092 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:19:53.703970 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:19:53.724640 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:19:53.724749 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:19:53.724768 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:19:53.730733 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:19:53.745479 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:19:53.745129 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:19:53.755454 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:19:53.765082 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:19:53.907172 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:19:53.917186 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:19:53.921311 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:19:53.918223 ignition[655]: Ignition 2.19.0
Nov 8 00:19:53.918231 ignition[655]: Stage: fetch-offline
Nov 8 00:19:53.918284 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:53.918295 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:19:53.918431 ignition[655]: parsed url from cmdline: ""
Nov 8 00:19:53.918435 ignition[655]: no config URL provided
Nov 8 00:19:53.918441 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:19:53.918453 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:19:53.918459 ignition[655]: failed to fetch config: resource requires networking
Nov 8 00:19:53.918944 ignition[655]: Ignition finished successfully
Nov 8 00:19:53.967261 systemd-networkd[752]: lo: Link UP
Nov 8 00:19:53.968194 systemd-networkd[752]: lo: Gained carrier
Nov 8 00:19:53.971399 systemd-networkd[752]: Enumeration completed
Nov 8 00:19:53.971891 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 8 00:19:53.971896 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 8 00:19:53.972994 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:19:53.974787 systemd[1]: Reached target network.target - Network.
Nov 8 00:19:53.975021 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:19:53.975027 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:19:53.976034 systemd-networkd[752]: eth0: Link UP
Nov 8 00:19:53.976040 systemd-networkd[752]: eth0: Gained carrier
Nov 8 00:19:53.976052 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 8 00:19:53.984980 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:19:53.987270 systemd-networkd[752]: eth1: Link UP
Nov 8 00:19:53.987283 systemd-networkd[752]: eth1: Gained carrier
Nov 8 00:19:53.987300 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:19:54.002807 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.24/20 acquired from 169.254.169.253
Nov 8 00:19:54.007023 ignition[755]: Ignition 2.19.0
Nov 8 00:19:54.007039 ignition[755]: Stage: fetch
Nov 8 00:19:54.007267 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:54.009403 systemd-networkd[752]: eth0: DHCPv4 address 64.23.144.43/20, gateway 64.23.144.1 acquired from 169.254.169.253
Nov 8 00:19:54.007286 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:19:54.007405 ignition[755]: parsed url from cmdline: ""
Nov 8 00:19:54.007408 ignition[755]: no config URL provided
Nov 8 00:19:54.007415 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:19:54.007423 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:19:54.007446 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 8 00:19:54.007639 ignition[755]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:19:54.207887 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2
Nov 8 00:19:54.230399 ignition[755]: GET result: OK
Nov 8 00:19:54.230613 ignition[755]: parsing config with SHA512: 4b1e3a395daadaeac546852197e499ce3be13ca9aa21c49a39e78aacaccb2f650459c6c6e8d5ba19feb71501fe432bcb4e0fd2c01c4aac86af53839fb704fe3d
Nov 8 00:19:54.236747 unknown[755]: fetched base config from "system"
Nov 8 00:19:54.236759 unknown[755]: fetched base config from "system"
Nov 8 00:19:54.237436 ignition[755]: fetch: fetch complete
Nov 8 00:19:54.236767 unknown[755]: fetched user config from "digitalocean"
Nov 8 00:19:54.237443 ignition[755]: fetch: fetch passed
Nov 8 00:19:54.240671 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:19:54.237520 ignition[755]: Ignition finished successfully
Nov 8 00:19:54.247078 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:19:54.277307 ignition[763]: Ignition 2.19.0
Nov 8 00:19:54.277320 ignition[763]: Stage: kargs
Nov 8 00:19:54.277501 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:54.277511 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:19:54.279613 ignition[763]: kargs: kargs passed
Nov 8 00:19:54.279671 ignition[763]: Ignition finished successfully
Nov 8 00:19:54.281421 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:19:54.287927 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:19:54.309043 ignition[769]: Ignition 2.19.0
Nov 8 00:19:54.309054 ignition[769]: Stage: disks
Nov 8 00:19:54.309252 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:54.312075 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:19:54.309264 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:19:54.318270 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:19:54.310206 ignition[769]: disks: disks passed
Nov 8 00:19:54.319265 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:19:54.310255 ignition[769]: Ignition finished successfully
Nov 8 00:19:54.320318 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:19:54.321383 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:19:54.322252 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:19:54.328997 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:19:54.348147 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:19:54.352366 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:19:54.361900 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:19:54.465764 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:19:54.465653 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:19:54.466775 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:19:54.479890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:19:54.482627 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:19:54.489330 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Nov 8 00:19:54.493753 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (785)
Nov 8 00:19:54.493908 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:19:54.494493 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:19:54.504289 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:19:54.504317 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:19:54.504330 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:19:54.494530 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:19:54.506571 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:19:54.510870 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:19:54.515718 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:19:54.526793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:19:54.577753 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:19:54.581734 coreos-metadata[788]: Nov 08 00:19:54.581 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 8 00:19:54.590211 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:19:54.591828 coreos-metadata[787]: Nov 08 00:19:54.591 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 8 00:19:54.594990 coreos-metadata[788]: Nov 08 00:19:54.594 INFO Fetch successful
Nov 8 00:19:54.601027 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:19:54.602494 coreos-metadata[788]: Nov 08 00:19:54.601 INFO wrote hostname ci-4081.3.6-n-01b3a4b0a8 to /sysroot/etc/hostname
Nov 8 00:19:54.604437 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:19:54.606105 coreos-metadata[787]: Nov 08 00:19:54.605 INFO Fetch successful
Nov 8 00:19:54.608188 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:19:54.613024 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Nov 8 00:19:54.613142 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Nov 8 00:19:54.706008 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:19:54.709843 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:19:54.714866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:19:54.722780 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:19:54.724175 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:19:54.746764 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:19:54.766722 ignition[906]: INFO : Ignition 2.19.0
Nov 8 00:19:54.766722 ignition[906]: INFO : Stage: mount
Nov 8 00:19:54.767908 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:54.767908 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:19:54.769811 ignition[906]: INFO : mount: mount passed
Nov 8 00:19:54.769811 ignition[906]: INFO : Ignition finished successfully
Nov 8 00:19:54.770412 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:19:54.773837 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:19:54.786611 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:19:54.806754 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (918)
Nov 8 00:19:54.809030 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:19:54.809075 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:19:54.811139 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:19:54.815742 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:19:54.817172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:19:54.854176 ignition[935]: INFO : Ignition 2.19.0
Nov 8 00:19:54.854176 ignition[935]: INFO : Stage: files
Nov 8 00:19:54.855491 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:54.855491 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:19:54.857171 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:19:54.858002 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:19:54.858002 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:19:54.862045 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:19:54.862843 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:19:54.862843 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:19:54.862629 unknown[935]: wrote ssh authorized keys file for user: core
Nov 8 00:19:54.865337 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:19:54.865337 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:19:54.865337 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:19:54.865337 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 8 00:19:54.971357 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 8 00:19:55.071354 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:19:55.071354 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:19:55.072977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 8 00:19:55.305884 systemd-networkd[752]: eth0: Gained IPv6LL
Nov 8 00:19:55.553907 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 8 00:19:55.563092 systemd-networkd[752]: eth1: Gained IPv6LL
Nov 8 00:19:58.466562 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:19:58.466562 ignition[935]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 8 00:19:58.468569 ignition[935]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:19:58.468569 ignition[935]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:19:58.468569 ignition[935]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 8 00:19:58.468569 ignition[935]: INFO :
files: op(e): [started] processing unit "prepare-helm.service" Nov 8 00:19:58.468569 ignition[935]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:19:58.468569 ignition[935]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:19:58.468569 ignition[935]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 8 00:19:58.468569 ignition[935]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:19:58.468569 ignition[935]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:19:58.468569 ignition[935]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:19:58.468569 ignition[935]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:19:58.478042 ignition[935]: INFO : files: files passed Nov 8 00:19:58.478042 ignition[935]: INFO : Ignition finished successfully Nov 8 00:19:58.470000 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:19:58.483013 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:19:58.486897 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:19:58.488300 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:19:58.489514 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:19:58.509888 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:19:58.509888 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:19:58.511485 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:19:58.511843 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:19:58.513396 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:19:58.519960 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:19:58.558715 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:19:58.558860 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:19:58.560012 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:19:58.560897 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:19:58.561867 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:19:58.563891 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:19:58.597075 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:19:58.602969 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:19:58.617132 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:19:58.618348 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:19:58.619600 systemd[1]: Stopped target timers.target - Timer Units. 
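[Annotation] The containerd drop-in written by op(d) above overrides the vendor containerd.service without replacing it. The log does not print the drop-in's payload; Flatcar's cgroup-v1 documentation suggests it points containerd at an alternate config file, roughly:

    # /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf
    # (contents assumed from Flatcar docs; not shown in the log)
    [Service]
    Environment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml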
Nov 8 00:19:58.620562 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:19:58.620709 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:19:58.622362 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:19:58.622983 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:19:58.623819 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:19:58.624667 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:19:58.625622 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:19:58.626594 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:19:58.627465 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:19:58.628433 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:19:58.629316 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:19:58.630178 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:19:58.630953 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:19:58.631087 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:19:58.632181 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:19:58.633317 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:19:58.634349 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:19:58.634501 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:19:58.635460 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:19:58.635675 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:19:58.636875 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:19:58.637096 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:19:58.638221 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:19:58.638379 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:19:58.639018 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:19:58.639151 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:19:58.646964 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:19:58.648166 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:19:58.648352 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:19:58.651930 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:19:58.652929 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:19:58.653572 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:19:58.655783 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:19:58.655909 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 8 00:19:58.666635 ignition[988]: INFO : Ignition 2.19.0 Nov 8 00:19:58.666635 ignition[988]: INFO : Stage: umount Nov 8 00:19:58.666635 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:19:58.666635 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 8 00:19:58.675770 ignition[988]: INFO : umount: umount passed Nov 8 00:19:58.675770 ignition[988]: INFO : Ignition finished successfully Nov 8 00:19:58.672046 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:19:58.672151 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:19:58.676345 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:19:58.677032 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:19:58.681942 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:19:58.682536 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:19:58.683656 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:19:58.683726 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:19:58.684829 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:19:58.684874 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:19:58.685998 systemd[1]: Stopped target network.target - Network. Nov 8 00:19:58.686972 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:19:58.687021 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:19:58.688017 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:19:58.690769 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:19:58.696800 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:19:58.697334 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:19:58.697751 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:19:58.698241 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:19:58.698291 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:19:58.699211 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:19:58.699251 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:19:58.700013 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:19:58.700066 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:19:58.700907 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:19:58.700953 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:19:58.701901 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:19:58.702819 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:19:58.704640 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:19:58.705219 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:19:58.705318 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:19:58.705844 systemd-networkd[752]: eth1: DHCPv6 lease lost Nov 8 00:19:58.706860 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:19:58.706993 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Nov 8 00:19:58.709786 systemd-networkd[752]: eth0: DHCPv6 lease lost Nov 8 00:19:58.711772 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:19:58.711895 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:19:58.713969 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:19:58.714070 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:19:58.718291 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:19:58.718374 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:19:58.724889 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:19:58.725442 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:19:58.725534 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:19:58.726169 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:19:58.726227 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:19:58.727213 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:19:58.727260 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:19:58.729921 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:19:58.729984 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:19:58.730593 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:19:58.747628 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:19:58.748739 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:19:58.749809 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:19:58.750020 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:19:58.752766 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:19:58.752847 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:19:58.754169 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:19:58.754233 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:19:58.755495 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:19:58.755570 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:19:58.757114 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:19:58.757186 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:19:58.758227 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:19:58.758297 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:19:58.764003 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:19:58.766057 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:19:58.766145 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:19:58.767173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:19:58.767223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:19:58.776321 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Nov 8 00:19:58.776497 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:19:58.778009 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:19:58.783875 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:19:58.793097 systemd[1]: Switching root. Nov 8 00:19:58.829528 systemd-journald[184]: Journal stopped Nov 8 00:20:00.220733 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Nov 8 00:20:00.220846 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:20:00.220863 kernel: SELinux: policy capability open_perms=1 Nov 8 00:20:00.220876 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:20:00.220888 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:20:00.220904 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:20:00.220916 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:20:00.220938 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:20:00.220962 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:20:00.220980 kernel: audit: type=1403 audit(1762561199.018:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:20:00.221008 systemd[1]: Successfully loaded SELinux policy in 39.891ms. Nov 8 00:20:00.221034 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.863ms. Nov 8 00:20:00.221053 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:20:00.221067 systemd[1]: Detected virtualization kvm. Nov 8 00:20:00.221080 systemd[1]: Detected architecture x86-64. Nov 8 00:20:00.221092 systemd[1]: Detected first boot. Nov 8 00:20:00.221109 systemd[1]: Hostname set to <ci-4081.3.6-n-01b3a4b0a8>. Nov 8 00:20:00.221126 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:20:00.221140 zram_generator::config[1047]: No configuration found. Nov 8 00:20:00.221154 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:20:00.221166 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:20:00.221178 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 8 00:20:00.221205 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:20:00.221225 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:20:00.221243 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:20:00.221264 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:20:00.221283 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:20:00.221302 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:20:00.221326 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:20:00.221347 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:20:00.221365 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:20:00.221383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
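[Annotation] The "Detected virtualization kvm" and "Detected architecture x86-64" lines above come from systemd's own probing, which is also exposed as a command; on this droplet it would print kvm:

    systemd-detect-virt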
Nov 8 00:20:00.221402 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:20:00.221431 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:20:00.221452 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:20:00.221475 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:20:00.221489 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:20:00.221501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:20:00.221514 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:20:00.221527 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:20:00.221540 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:20:00.221557 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:20:00.221570 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:20:00.221583 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:20:00.221596 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:20:00.221609 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:20:00.221622 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:20:00.221635 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:20:00.221647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:20:00.221664 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:20:00.221677 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:20:00.221690 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:20:00.221717 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:20:00.221730 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:20:00.221744 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:00.221758 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:20:00.221780 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:20:00.221799 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:20:00.221822 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:20:00.221840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:00.221858 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:20:00.221875 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:20:00.221893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:20:00.221911 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:20:00.221929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:20:00.221948 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Nov 8 00:20:00.221965 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:20:00.221979 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:20:00.221999 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 8 00:20:00.222022 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 8 00:20:00.222042 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:20:00.222055 kernel: fuse: init (API version 7.39) Nov 8 00:20:00.222068 kernel: ACPI: bus type drm_connector registered Nov 8 00:20:00.222080 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:20:00.222093 kernel: loop: module loaded Nov 8 00:20:00.222109 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:20:00.222121 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:20:00.222135 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:20:00.222148 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:00.222211 systemd-journald[1145]: Collecting audit messages is disabled. Nov 8 00:20:00.222238 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:20:00.222253 systemd-journald[1145]: Journal started Nov 8 00:20:00.222282 systemd-journald[1145]: Runtime Journal (/run/log/journal/bd9c6942fc22442d980d66ded879e368) is 4.9M, max 39.3M, 34.4M free. Nov 8 00:20:00.227277 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:20:00.229836 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:20:00.230761 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:20:00.231550 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:20:00.232327 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:20:00.238995 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:20:00.240212 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:20:00.241414 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:20:00.242600 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:20:00.242919 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:20:00.244878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:20:00.245177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:20:00.246446 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:20:00.246754 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:20:00.248153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:20:00.248448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:20:00.249623 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:20:00.250239 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
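[Annotation] The modprobe@ entries in this stretch are instances of a template unit that loads one kernel module per instance name, which is why configfs, dm_mod, drm, efi_pstore, fuse, and loop each appear as a separate service. Starting an instance by hand is roughly equivalent to a plain modprobe:

    systemctl start modprobe@loop.service   # roughly: modprobe loop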
Nov 8 00:20:00.251359 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:20:00.251808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:20:00.255847 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:20:00.257272 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:20:00.260435 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:20:00.282616 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:20:00.290942 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:20:00.309073 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:20:00.311924 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:20:00.330065 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:20:00.342035 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:20:00.344652 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:20:00.356007 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:20:00.357105 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:20:00.369026 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:20:00.383007 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:20:00.396083 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:20:00.397104 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:20:00.399876 systemd-journald[1145]: Time spent on flushing to /var/log/journal/bd9c6942fc22442d980d66ded879e368 is 86.729ms for 975 entries. Nov 8 00:20:00.399876 systemd-journald[1145]: System Journal (/var/log/journal/bd9c6942fc22442d980d66ded879e368) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:20:00.518156 systemd-journald[1145]: Received client request to flush runtime journal. Nov 8 00:20:00.402326 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:20:00.430001 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:20:00.461644 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:20:00.464157 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:20:00.484350 udevadm[1197]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:20:00.494427 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:20:00.523819 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:20:00.534731 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Nov 8 00:20:00.534761 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Nov 8 00:20:00.543279 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
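[Annotation] systemd-journal-flush.service above moves the volatile runtime journal in /run/log/journal to persistent storage under /var/log/journal; the "Received client request to flush runtime journal" line records that handoff. The same flush can be requested manually:

    journalctl --flush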
Nov 8 00:20:00.559095 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:20:00.619395 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:20:00.627137 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:20:00.667960 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Nov 8 00:20:00.667995 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Nov 8 00:20:00.679761 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:20:01.616297 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:20:01.626146 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:20:01.681022 systemd-udevd[1219]: Using default interface naming scheme 'v255'. Nov 8 00:20:01.726942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:20:01.737981 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:20:01.763061 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:20:01.837453 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:20:01.901146 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 8 00:20:01.947165 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:01.947461 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:01.957731 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1233) Nov 8 00:20:01.963171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:20:01.972302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:20:01.979147 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:20:01.981934 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:20:01.982008 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:20:01.982085 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:01.987208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:20:01.987515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:20:01.993504 systemd-networkd[1222]: lo: Link UP Nov 8 00:20:01.993524 systemd-networkd[1222]: lo: Gained carrier Nov 8 00:20:01.997416 systemd-networkd[1222]: Enumeration completed Nov 8 00:20:01.997781 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:20:02.000451 systemd-networkd[1222]: eth0: Configuring with /run/systemd/network/10-4e:17:44:d5:a4:67.network. Nov 8 00:20:02.001560 systemd-networkd[1222]: eth1: Configuring with /run/systemd/network/10-56:58:33:e3:98:95.network. 
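[Annotation] The per-interface files under /run/systemd/network are generated with each NIC's MAC address in the [Match] section, so eth0 and eth1 get independent DHCP configuration. Their exact contents are not shown in the log; a minimal sketch of the likely shape:

    # /run/systemd/network/10-4e:17:44:d5:a4:67.network (shape assumed, contents not in the log)
    [Match]
    MACAddress=4e:17:44:d5:a4:67

    [Network]
    DHCP=ipv4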
Nov 8 00:20:02.002392 systemd-networkd[1222]: eth0: Link UP Nov 8 00:20:02.002399 systemd-networkd[1222]: eth0: Gained carrier Nov 8 00:20:02.006233 systemd-networkd[1222]: eth1: Link UP Nov 8 00:20:02.006249 systemd-networkd[1222]: eth1: Gained carrier Nov 8 00:20:02.006815 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:20:02.020637 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:20:02.021021 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:20:02.026149 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:20:02.043601 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:20:02.050052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:20:02.065654 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:20:02.165964 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:20:02.179812 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:20:02.190841 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 8 00:20:02.240816 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:20:02.288516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:20:02.288764 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:20:02.312315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:02.320740 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 8 00:20:02.322726 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 8 00:20:02.337742 kernel: Console: switching to colour dummy device 80x25 Nov 8 00:20:02.341310 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 8 00:20:02.341470 kernel: [drm] features: -context_init Nov 8 00:20:02.350977 kernel: [drm] number of scanouts: 1 Nov 8 00:20:02.351096 kernel: [drm] number of cap sets: 0 Nov 8 00:20:02.378823 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Nov 8 00:20:02.380281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:20:02.380735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:02.398552 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 8 00:20:02.399904 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:20:02.400773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:02.415744 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 8 00:20:02.438065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:20:02.438417 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:02.483757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:02.558821 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:20:02.583366 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:20:02.594253 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:20:02.619928 lvm[1279]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Nov 8 00:20:02.628058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:02.660965 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:20:02.662489 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:20:02.671249 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:20:02.688963 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:20:02.723573 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:20:02.726514 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:20:02.733959 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 8 00:20:02.734362 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:20:02.734692 systemd[1]: Reached target machines.target - Containers. Nov 8 00:20:02.738108 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:20:02.761959 kernel: ISO 9660 Extensions: RRIP_1991A Nov 8 00:20:02.765332 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 8 00:20:02.767846 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:20:02.770287 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:20:02.779039 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:20:02.790023 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:20:02.793838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:02.797956 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:20:02.810065 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:20:02.817025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:20:02.823684 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:20:02.839084 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:20:02.843532 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:20:02.862019 kernel: loop0: detected capacity change from 0 to 142488 Nov 8 00:20:02.906781 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:20:02.933868 kernel: loop1: detected capacity change from 0 to 140768 Nov 8 00:20:02.999367 kernel: loop2: detected capacity change from 0 to 8 Nov 8 00:20:03.030747 kernel: loop3: detected capacity change from 0 to 224512 Nov 8 00:20:03.078282 kernel: loop4: detected capacity change from 0 to 142488 Nov 8 00:20:03.104730 kernel: loop5: detected capacity change from 0 to 140768 Nov 8 00:20:03.132509 kernel: loop6: detected capacity change from 0 to 8 Nov 8 00:20:03.132671 kernel: loop7: detected capacity change from 0 to 224512 Nov 8 00:20:03.144151 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. 
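[Annotation] The (sd-merge) line above shows systemd-sysext collecting the extension images found under /etc/extensions and the OEM partition before overlaying them onto /usr, which is how the kubernetes.raw symlink written by Ignition becomes a live system extension. The merge can be inspected or redone from a shell:

    systemd-sysext status    # lists merged extensions and what they overlay
    systemd-sysext refresh   # re-merges after adding or removing a .raw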
Nov 8 00:20:03.145106 (sd-merge)[1311]: Merged extensions into '/usr'. Nov 8 00:20:03.155402 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:20:03.155668 systemd[1]: Reloading... Nov 8 00:20:03.270936 zram_generator::config[1340]: No configuration found. Nov 8 00:20:03.466443 ldconfig[1297]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:20:03.497897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:03.587628 systemd[1]: Reloading finished in 430 ms. Nov 8 00:20:03.609235 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:20:03.613180 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:20:03.628012 systemd[1]: Starting ensure-sysext.service... Nov 8 00:20:03.637970 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:20:03.649657 systemd[1]: Reloading requested from client PID 1390 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:20:03.649681 systemd[1]: Reloading... Nov 8 00:20:03.691041 systemd-networkd[1222]: eth1: Gained IPv6LL Nov 8 00:20:03.697559 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:20:03.699857 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:20:03.701632 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:20:03.702407 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Nov 8 00:20:03.704725 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Nov 8 00:20:03.709484 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:20:03.710386 systemd-tmpfiles[1392]: Skipping /boot Nov 8 00:20:03.725857 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:20:03.726078 systemd-tmpfiles[1392]: Skipping /boot Nov 8 00:20:03.778420 zram_generator::config[1422]: No configuration found. Nov 8 00:20:03.818972 systemd-networkd[1222]: eth0: Gained IPv6LL Nov 8 00:20:03.935146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:04.035646 systemd[1]: Reloading finished in 385 ms. Nov 8 00:20:04.056759 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:20:04.077679 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:20:04.095112 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:20:04.108160 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:20:04.117267 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:20:04.131891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:20:04.146405 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
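[Annotation] The "Duplicate line for path" warnings above mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first entry and ignores the rest. Each line follows the type/path/mode/owner/group/age layout, e.g. a directory entry like the duplicated /root one:

    # tmpfiles.d syntax: type path mode user group age (argument omitted)
    d /root 0700 root root -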
Nov 8 00:20:04.156395 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:04.156783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:04.181152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:20:04.197219 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:20:04.205177 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:20:04.208466 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:04.219573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:04.225237 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:20:04.232286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:20:04.232616 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:20:04.235504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:20:04.236781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:20:04.250509 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:20:04.250933 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:20:04.270102 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:20:04.271909 augenrules[1503]: No rules Nov 8 00:20:04.286004 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:20:04.311120 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:20:04.323193 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:04.323578 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:04.329202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:20:04.342151 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:20:04.354141 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:20:04.355410 systemd-resolved[1484]: Positive Trust Anchors: Nov 8 00:20:04.355425 systemd-resolved[1484]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:20:04.355476 systemd-resolved[1484]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:20:04.360008 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
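[Annotation] The trust-anchor dump above is systemd-resolved loading its built-in root DNSSEC key (the ". IN DS 20326 ..." record) plus negative anchors for private and special-use domains. Once the system is up, the resulting resolver state can be checked with:

    resolvectl status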
Nov 8 00:20:04.361069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:04.370020 systemd-resolved[1484]: Using system hostname 'ci-4081.3.6-n-01b3a4b0a8'. Nov 8 00:20:04.390009 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:20:04.393624 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:20:04.393733 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:04.397378 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:20:04.400001 systemd[1]: Finished ensure-sysext.service. Nov 8 00:20:04.401948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:20:04.402187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:20:04.403895 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:20:04.404093 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:20:04.407172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:20:04.407437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:20:04.412434 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:20:04.413993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:20:04.419282 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:20:04.427250 systemd[1]: Reached target network.target - Network. Nov 8 00:20:04.429064 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:20:04.429753 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:20:04.430270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:20:04.430478 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:20:04.438198 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:20:04.514020 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:20:04.514900 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:20:04.515525 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:20:04.517780 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:20:04.518368 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:20:04.518951 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:20:04.518998 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:20:04.519476 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:20:04.520442 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:20:04.521243 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
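[Annotation] systemd-timesyncd, started above, is a simple SNTP client; which server it locked onto (here an ntp.org pool member, per the later "Contacted time server" line) can be shown with:

    timedatectl timesync-status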
Nov 8 00:20:04.521640 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:20:04.525948 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:20:04.529291 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:20:04.532327 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:20:04.536613 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:20:04.538447 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:20:04.538974 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:20:04.540448 systemd[1]: System is tainted: cgroupsv1 Nov 8 00:20:04.540641 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:20:04.540686 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:20:04.543850 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:20:04.554979 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:20:04.572069 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:20:04.577862 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:20:04.591977 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:20:04.593024 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:20:04.607421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:04.612903 coreos-metadata[1541]: Nov 08 00:20:04.612 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 8 00:20:04.614402 dbus-daemon[1542]: [system] SELinux support is enabled Nov 8 00:20:04.618798 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:20:04.625924 jq[1546]: false Nov 8 00:20:04.626733 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:20:04.637763 coreos-metadata[1541]: Nov 08 00:20:04.635 INFO Fetch successful Nov 8 00:20:04.650849 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:20:04.663247 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:20:04.680956 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:20:04.691897 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:20:04.692893 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
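[Annotation] coreos-metadata above fetches the droplet's full metadata document, the same JSON the hostname agent used earlier in the initrd. It can be pulled by hand from the link-local metadata service:

    curl -s http://169.254.169.254/metadata/v1.json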
Nov 8 00:20:04.700842 extend-filesystems[1547]: Found loop4 Nov 8 00:20:04.700842 extend-filesystems[1547]: Found loop5 Nov 8 00:20:04.700842 extend-filesystems[1547]: Found loop6 Nov 8 00:20:04.700842 extend-filesystems[1547]: Found loop7 Nov 8 00:20:04.700842 extend-filesystems[1547]: Found vda Nov 8 00:20:04.700842 extend-filesystems[1547]: Found vda1 Nov 8 00:20:04.700842 extend-filesystems[1547]: Found vda2 Nov 8 00:20:04.700842 extend-filesystems[1547]: Found vda3 Nov 8 00:20:04.700842 extend-filesystems[1547]: Found usr Nov 8 00:20:04.700842 extend-filesystems[1547]: Found vda4 Nov 8 00:20:04.700842 extend-filesystems[1547]: Found vda6 Nov 8 00:20:04.700842 extend-filesystems[1547]: Found vda7 Nov 8 00:20:04.700842 extend-filesystems[1547]: Found vda9 Nov 8 00:20:04.700842 extend-filesystems[1547]: Checking size of /dev/vda9 Nov 8 00:20:04.707195 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:20:04.731961 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:20:04.734248 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:20:05.620817 systemd-timesyncd[1536]: Contacted time server 207.58.172.126:123 (0.flatcar.pool.ntp.org). Nov 8 00:20:05.622656 systemd-resolved[1484]: Clock change detected. Flushing caches. Nov 8 00:20:05.624596 systemd-timesyncd[1536]: Initial clock synchronization to Sat 2025-11-08 00:20:05.620660 UTC. Nov 8 00:20:05.626067 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:20:05.626455 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:20:05.633570 extend-filesystems[1547]: Resized partition /dev/vda9 Nov 8 00:20:05.645613 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 8 00:20:05.642847 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:20:05.645861 jq[1579]: true Nov 8 00:20:05.646230 extend-filesystems[1586]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:20:05.643231 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:20:05.649535 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:20:05.674286 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:20:05.677862 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:20:05.719491 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 8 00:20:05.729946 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:20:05.733973 jq[1590]: true Nov 8 00:20:05.742493 update_engine[1568]: I20251108 00:20:05.734130 1568 main.cc:92] Flatcar Update Engine starting Nov 8 00:20:05.753637 extend-filesystems[1586]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:20:05.753637 extend-filesystems[1586]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 8 00:20:05.753637 extend-filesystems[1586]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 8 00:20:05.748909 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Nov 8 00:20:05.781838 extend-filesystems[1547]: Resized filesystem in /dev/vda9 Nov 8 00:20:05.781838 extend-filesystems[1547]: Found vdb Nov 8 00:20:05.801281 update_engine[1568]: I20251108 00:20:05.759902 1568 update_check_scheduler.cc:74] Next update check in 5m18s Nov 8 00:20:05.749191 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:20:05.801514 tar[1589]: linux-amd64/LICENSE Nov 8 00:20:05.801514 tar[1589]: linux-amd64/helm Nov 8 00:20:05.772666 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:20:05.784590 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:20:05.799345 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:20:05.799456 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:20:05.799504 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:20:05.799959 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:20:05.800044 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 8 00:20:05.800060 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:20:05.804414 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:20:05.810291 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:20:05.902491 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1611) Nov 8 00:20:05.970313 bash[1637]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:20:05.972954 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:20:05.987712 systemd[1]: Starting sshkeys.service... Nov 8 00:20:06.016655 systemd-logind[1566]: New seat seat0. Nov 8 00:20:06.028033 systemd-logind[1566]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:20:06.028059 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:20:06.028338 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:20:06.076662 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:20:06.089920 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
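The extend-filesystems sequence above is an online ext4 grow of /dev/vda9, from 553472 to 15121403 4k blocks, performed while the filesystem stays mounted on /. A rough manual equivalent, assuming cloud-utils' growpart for the partition step (Flatcar's unit drives the whole sequence itself):

  # Grow partition 9 to fill the disk, then resize the mounted ext4
  # filesystem online, as the logged resize2fs 1.47.1 run does above.
  growpart /dev/vda 9
  resize2fs /dev/vda9
  df -h /    # confirm the new size on the root mount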
Nov 8 00:20:06.176288 coreos-metadata[1648]: Nov 08 00:20:06.176 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 8 00:20:06.190028 coreos-metadata[1648]: Nov 08 00:20:06.188 INFO Fetch successful Nov 8 00:20:06.197333 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:20:06.211825 unknown[1648]: wrote ssh authorized keys file for user: core Nov 8 00:20:06.218210 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:20:06.247648 update-ssh-keys[1660]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:20:06.250641 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:20:06.263382 systemd[1]: Finished sshkeys.service. Nov 8 00:20:06.309681 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:20:06.324985 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:20:06.330827 containerd[1591]: time="2025-11-08T00:20:06.330720589Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:20:06.356333 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:20:06.357706 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:20:06.371128 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:20:06.415484 containerd[1591]: time="2025-11-08T00:20:06.411966274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:06.424757 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:20:06.435720 containerd[1591]: time="2025-11-08T00:20:06.435658517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:06.436663 containerd[1591]: time="2025-11-08T00:20:06.436626765Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:20:06.436791 containerd[1591]: time="2025-11-08T00:20:06.436773811Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:20:06.437491 containerd[1591]: time="2025-11-08T00:20:06.437086527Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:20:06.437719 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:20:06.438621 containerd[1591]: time="2025-11-08T00:20:06.438579886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:06.441576 containerd[1591]: time="2025-11-08T00:20:06.441303404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:06.441576 containerd[1591]: time="2025-11-08T00:20:06.441338031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:06.443195 containerd[1591]: time="2025-11-08T00:20:06.443158107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:06.443540 containerd[1591]: time="2025-11-08T00:20:06.443509674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:06.445484 containerd[1591]: time="2025-11-08T00:20:06.445029810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:06.445484 containerd[1591]: time="2025-11-08T00:20:06.445057158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:06.445484 containerd[1591]: time="2025-11-08T00:20:06.445174489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:06.445484 containerd[1591]: time="2025-11-08T00:20:06.445409680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:06.446554 containerd[1591]: time="2025-11-08T00:20:06.446530533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:06.446634 containerd[1591]: time="2025-11-08T00:20:06.446623036Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:20:06.446784 containerd[1591]: time="2025-11-08T00:20:06.446771068Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:20:06.446893 containerd[1591]: time="2025-11-08T00:20:06.446880290Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:20:06.453515 containerd[1591]: time="2025-11-08T00:20:06.453445851Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:20:06.453696 containerd[1591]: time="2025-11-08T00:20:06.453680353Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:20:06.453757 containerd[1591]: time="2025-11-08T00:20:06.453746835Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:20:06.453805 containerd[1591]: time="2025-11-08T00:20:06.453796080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:20:06.453837 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:20:06.453969 containerd[1591]: time="2025-11-08T00:20:06.453954151Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:20:06.454191 containerd[1591]: time="2025-11-08T00:20:06.454174518Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:20:06.454564 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:20:06.454807 containerd[1591]: time="2025-11-08T00:20:06.454787245Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 8 00:20:06.454983 containerd[1591]: time="2025-11-08T00:20:06.454967559Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:20:06.455066 containerd[1591]: time="2025-11-08T00:20:06.455049708Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:20:06.455175 containerd[1591]: time="2025-11-08T00:20:06.455158017Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:20:06.455227 containerd[1591]: time="2025-11-08T00:20:06.455217577Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:20:06.455272 containerd[1591]: time="2025-11-08T00:20:06.455263582Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:20:06.455315 containerd[1591]: time="2025-11-08T00:20:06.455306639Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:20:06.455374 containerd[1591]: time="2025-11-08T00:20:06.455363746Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:20:06.455421 containerd[1591]: time="2025-11-08T00:20:06.455412410Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:20:06.455476 containerd[1591]: time="2025-11-08T00:20:06.455455512Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:20:06.455524 containerd[1591]: time="2025-11-08T00:20:06.455515657Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:20:06.455566 containerd[1591]: time="2025-11-08T00:20:06.455558314Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:20:06.455620 containerd[1591]: time="2025-11-08T00:20:06.455610670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.455689 containerd[1591]: time="2025-11-08T00:20:06.455676589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.455736 containerd[1591]: time="2025-11-08T00:20:06.455727686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.455791 containerd[1591]: time="2025-11-08T00:20:06.455781684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.455836 containerd[1591]: time="2025-11-08T00:20:06.455827736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.455884462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.455901287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.455914607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.455927113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.455941862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.455954811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.455969323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.455982677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.455999215Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.456025830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.456045714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.456061349Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.456111265Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:20:06.456489 containerd[1591]: time="2025-11-08T00:20:06.456135184Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:20:06.456833 containerd[1591]: time="2025-11-08T00:20:06.456150714Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:20:06.456833 containerd[1591]: time="2025-11-08T00:20:06.456166576Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:20:06.456833 containerd[1591]: time="2025-11-08T00:20:06.456181106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:20:06.456833 containerd[1591]: time="2025-11-08T00:20:06.456209118Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:20:06.456833 containerd[1591]: time="2025-11-08T00:20:06.456229649Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:20:06.456833 containerd[1591]: time="2025-11-08T00:20:06.456243691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:20:06.457088 containerd[1591]: time="2025-11-08T00:20:06.457035326Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:20:06.457394 containerd[1591]: time="2025-11-08T00:20:06.457365441Z" level=info msg="Connect containerd service" Nov 8 00:20:06.457557 containerd[1591]: time="2025-11-08T00:20:06.457536820Z" level=info msg="using legacy CRI server" Nov 8 00:20:06.457611 containerd[1591]: time="2025-11-08T00:20:06.457601896Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:20:06.457785 containerd[1591]: time="2025-11-08T00:20:06.457764387Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:20:06.458621 containerd[1591]: time="2025-11-08T00:20:06.458581474Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:20:06.459443 
containerd[1591]: time="2025-11-08T00:20:06.459418588Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:20:06.460691 containerd[1591]: time="2025-11-08T00:20:06.459601453Z" level=info msg="Start subscribing containerd event" Nov 8 00:20:06.460829 containerd[1591]: time="2025-11-08T00:20:06.460765014Z" level=info msg="Start recovering state" Nov 8 00:20:06.461144 containerd[1591]: time="2025-11-08T00:20:06.460887967Z" level=info msg="Start event monitor" Nov 8 00:20:06.461144 containerd[1591]: time="2025-11-08T00:20:06.460901134Z" level=info msg="Start snapshots syncer" Nov 8 00:20:06.461144 containerd[1591]: time="2025-11-08T00:20:06.460924690Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:20:06.461144 containerd[1591]: time="2025-11-08T00:20:06.460933796Z" level=info msg="Start streaming server" Nov 8 00:20:06.461308 containerd[1591]: time="2025-11-08T00:20:06.461293621Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:20:06.462488 containerd[1591]: time="2025-11-08T00:20:06.461516041Z" level=info msg="containerd successfully booted in 0.134938s" Nov 8 00:20:06.462207 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:20:06.831846 tar[1589]: linux-amd64/README.md Nov 8 00:20:06.851075 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:20:07.444707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:07.447066 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:20:07.449833 systemd[1]: Startup finished in 9.372s (kernel) + 7.599s (userspace) = 16.972s. Nov 8 00:20:07.455143 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:20:08.137902 kubelet[1701]: E1108 00:20:08.137823 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:20:08.140072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:20:08.140269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:20:14.982811 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:20:14.987812 systemd[1]: Started sshd@0-64.23.144.43:22-139.178.68.195:35308.service - OpenSSH per-connection server daemon (139.178.68.195:35308). Nov 8 00:20:15.067709 sshd[1713]: Accepted publickey for core from 139.178.68.195 port 35308 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:15.070385 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:15.082007 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:20:15.082518 systemd-logind[1566]: New session 1 of user core. Nov 8 00:20:15.088740 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:20:15.103881 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:20:15.117047 systemd[1]: Starting user@500.service - User Manager for UID 500... 
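containerd reports a successful boot on both of its sockets just before the kubelet's first start attempt fails for want of /var/lib/kubelet/config.yaml. A quick way to confirm the runtime itself is healthy at this point, assuming the stock socket path shown in the log:

  # Talk to the freshly booted containerd over the sockets logged above.
  ctr --address /run/containerd/containerd.sock version
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock info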
Nov 8 00:20:15.120310 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:20:15.240056 systemd[1719]: Queued start job for default target default.target. Nov 8 00:20:15.240533 systemd[1719]: Created slice app.slice - User Application Slice. Nov 8 00:20:15.240556 systemd[1719]: Reached target paths.target - Paths. Nov 8 00:20:15.240571 systemd[1719]: Reached target timers.target - Timers. Nov 8 00:20:15.250662 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:20:15.259160 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:20:15.259234 systemd[1719]: Reached target sockets.target - Sockets. Nov 8 00:20:15.259250 systemd[1719]: Reached target basic.target - Basic System. Nov 8 00:20:15.259300 systemd[1719]: Reached target default.target - Main User Target. Nov 8 00:20:15.259338 systemd[1719]: Startup finished in 131ms. Nov 8 00:20:15.259696 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:20:15.268953 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:20:15.338342 systemd[1]: Started sshd@1-64.23.144.43:22-139.178.68.195:35322.service - OpenSSH per-connection server daemon (139.178.68.195:35322). Nov 8 00:20:15.389713 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 35322 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:15.392033 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:15.399391 systemd-logind[1566]: New session 2 of user core. Nov 8 00:20:15.405045 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:20:15.471341 sshd[1731]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:15.474954 systemd[1]: sshd@1-64.23.144.43:22-139.178.68.195:35322.service: Deactivated successfully. Nov 8 00:20:15.478695 systemd-logind[1566]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:20:15.496931 systemd[1]: Started sshd@2-64.23.144.43:22-139.178.68.195:35334.service - OpenSSH per-connection server daemon (139.178.68.195:35334). Nov 8 00:20:15.497373 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:20:15.498882 systemd-logind[1566]: Removed session 2. Nov 8 00:20:15.541984 sshd[1739]: Accepted publickey for core from 139.178.68.195 port 35334 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:15.544055 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:15.550928 systemd-logind[1566]: New session 3 of user core. Nov 8 00:20:15.556942 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:20:15.616708 sshd[1739]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:15.620305 systemd[1]: sshd@2-64.23.144.43:22-139.178.68.195:35334.service: Deactivated successfully. Nov 8 00:20:15.625288 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:20:15.625810 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:20:15.630816 systemd[1]: Started sshd@3-64.23.144.43:22-139.178.68.195:35348.service - OpenSSH per-connection server daemon (139.178.68.195:35348). Nov 8 00:20:15.632818 systemd-logind[1566]: Removed session 3. 
Nov 8 00:20:15.687744 sshd[1747]: Accepted publickey for core from 139.178.68.195 port 35348 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:15.689829 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:15.695727 systemd-logind[1566]: New session 4 of user core. Nov 8 00:20:15.701962 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:20:15.769582 sshd[1747]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:15.780175 systemd[1]: Started sshd@4-64.23.144.43:22-139.178.68.195:35364.service - OpenSSH per-connection server daemon (139.178.68.195:35364). Nov 8 00:20:15.781203 systemd[1]: sshd@3-64.23.144.43:22-139.178.68.195:35348.service: Deactivated successfully. Nov 8 00:20:15.784614 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:20:15.785540 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:20:15.788850 systemd-logind[1566]: Removed session 4. Nov 8 00:20:15.821852 sshd[1753]: Accepted publickey for core from 139.178.68.195 port 35364 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:15.824367 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:15.830717 systemd-logind[1566]: New session 5 of user core. Nov 8 00:20:15.836921 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:20:15.910356 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:20:15.911140 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:15.924547 sudo[1759]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:15.929883 sshd[1753]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:15.939813 systemd[1]: Started sshd@5-64.23.144.43:22-139.178.68.195:35366.service - OpenSSH per-connection server daemon (139.178.68.195:35366). Nov 8 00:20:15.940581 systemd[1]: sshd@4-64.23.144.43:22-139.178.68.195:35364.service: Deactivated successfully. Nov 8 00:20:15.943368 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:20:15.945597 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:20:15.947124 systemd-logind[1566]: Removed session 5. Nov 8 00:20:15.990048 sshd[1762]: Accepted publickey for core from 139.178.68.195 port 35366 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:15.991839 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:15.998486 systemd-logind[1566]: New session 6 of user core. Nov 8 00:20:16.003879 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:20:16.066420 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:20:16.067326 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:16.072828 sudo[1769]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:16.081600 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:20:16.082075 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:16.099859 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Nov 8 00:20:16.104176 auditctl[1772]: No rules Nov 8 00:20:16.104877 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:20:16.105223 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:20:16.116475 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:20:16.146923 augenrules[1791]: No rules Nov 8 00:20:16.148756 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:20:16.150956 sudo[1768]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:16.157424 sshd[1762]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:16.167875 systemd[1]: Started sshd@6-64.23.144.43:22-139.178.68.195:35372.service - OpenSSH per-connection server daemon (139.178.68.195:35372). Nov 8 00:20:16.168384 systemd[1]: sshd@5-64.23.144.43:22-139.178.68.195:35366.service: Deactivated successfully. Nov 8 00:20:16.171245 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:20:16.172342 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:20:16.175164 systemd-logind[1566]: Removed session 6. Nov 8 00:20:16.214930 sshd[1798]: Accepted publickey for core from 139.178.68.195 port 35372 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:16.216855 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:16.221542 systemd-logind[1566]: New session 7 of user core. Nov 8 00:20:16.230853 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:20:16.294216 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:20:16.294751 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:16.766230 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:20:16.784731 (dockerd)[1819]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:20:17.293010 dockerd[1819]: time="2025-11-08T00:20:17.292917333Z" level=info msg="Starting up" Nov 8 00:20:17.546145 dockerd[1819]: time="2025-11-08T00:20:17.545780377Z" level=info msg="Loading containers: start." Nov 8 00:20:17.667520 kernel: Initializing XFRM netlink socket Nov 8 00:20:17.767842 systemd-networkd[1222]: docker0: Link UP Nov 8 00:20:17.786075 dockerd[1819]: time="2025-11-08T00:20:17.785586037Z" level=info msg="Loading containers: done." Nov 8 00:20:17.806782 dockerd[1819]: time="2025-11-08T00:20:17.806661808Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:20:17.806931 dockerd[1819]: time="2025-11-08T00:20:17.806840732Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:20:17.807151 dockerd[1819]: time="2025-11-08T00:20:17.806983135Z" level=info msg="Daemon has completed initialization" Nov 8 00:20:17.843807 dockerd[1819]: time="2025-11-08T00:20:17.843659959Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:20:17.844153 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:20:18.153397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
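dockerd comes up on the overlay2 storage driver with its API on /run/docker.sock. A short check that matches the daemon details logged above (server version 26.1.0, overlay2):

  # Confirm the daemon the log shows starting is serving its API socket.
  docker version --format '{{.Server.Version}}'
  docker info --format '{{.Driver}}'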
Nov 8 00:20:18.159774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:18.322106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:18.334089 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:20:18.409521 kubelet[1978]: E1108 00:20:18.408425 1978 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:20:18.416171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:20:18.416369 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:20:18.862737 containerd[1591]: time="2025-11-08T00:20:18.861910344Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:20:19.453862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3715619363.mount: Deactivated successfully. Nov 8 00:20:20.795935 containerd[1591]: time="2025-11-08T00:20:20.795871554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:20.797417 containerd[1591]: time="2025-11-08T00:20:20.797377609Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:20:20.798336 containerd[1591]: time="2025-11-08T00:20:20.798300751Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:20.801176 containerd[1591]: time="2025-11-08T00:20:20.801146220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:20.803408 containerd[1591]: time="2025-11-08T00:20:20.803374792Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.941414731s" Nov 8 00:20:20.803469 containerd[1591]: time="2025-11-08T00:20:20.803419721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:20:20.804084 containerd[1591]: time="2025-11-08T00:20:20.803966779Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:20:22.216533 containerd[1591]: time="2025-11-08T00:20:22.216453180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:22.218251 containerd[1591]: time="2025-11-08T00:20:22.218175303Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 00:20:22.219091 containerd[1591]: time="2025-11-08T00:20:22.219055509Z" level=info msg="ImageCreate event 
name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:22.222286 containerd[1591]: time="2025-11-08T00:20:22.221718058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:22.222974 containerd[1591]: time="2025-11-08T00:20:22.222939283Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.418754544s" Nov 8 00:20:22.223047 containerd[1591]: time="2025-11-08T00:20:22.222976553Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:20:22.223565 containerd[1591]: time="2025-11-08T00:20:22.223526567Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:20:23.355224 containerd[1591]: time="2025-11-08T00:20:23.355157823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:23.357511 containerd[1591]: time="2025-11-08T00:20:23.357408230Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 00:20:23.359491 containerd[1591]: time="2025-11-08T00:20:23.358397241Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:23.361983 containerd[1591]: time="2025-11-08T00:20:23.361948343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:23.364116 containerd[1591]: time="2025-11-08T00:20:23.364079916Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.139764708s" Nov 8 00:20:23.364272 containerd[1591]: time="2025-11-08T00:20:23.364255617Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:20:23.364814 containerd[1591]: time="2025-11-08T00:20:23.364789872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:20:24.612312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801297262.mount: Deactivated successfully. 
Nov 8 00:20:25.216364 containerd[1591]: time="2025-11-08T00:20:25.216299968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:25.217274 containerd[1591]: time="2025-11-08T00:20:25.217120551Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:20:25.217931 containerd[1591]: time="2025-11-08T00:20:25.217765405Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:25.219852 containerd[1591]: time="2025-11-08T00:20:25.219815328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:25.220504 containerd[1591]: time="2025-11-08T00:20:25.220444381Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.855620649s" Nov 8 00:20:25.220567 containerd[1591]: time="2025-11-08T00:20:25.220515861Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:20:25.221308 containerd[1591]: time="2025-11-08T00:20:25.221277272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:20:25.222843 systemd-resolved[1484]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 8 00:20:25.781438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286322458.mount: Deactivated successfully. 
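The pulls above fetch the Kubernetes control-plane image set by tag and digest through containerd's CRI. The same images can be pulled or inspected by hand with crictl against the socket used earlier; the names and tags below are taken from the log:

  # Mirror two of the pulls the provisioning flow performs above.
  crictl pull registry.k8s.io/kube-proxy:v1.32.9
  crictl pull registry.k8s.io/coredns/coredns:v1.11.3
  crictl images | grep -E 'kube-|coredns|pause|etcd'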
Nov 8 00:20:26.584446 containerd[1591]: time="2025-11-08T00:20:26.584382216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:26.586056 containerd[1591]: time="2025-11-08T00:20:26.585113521Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 8 00:20:26.586303 containerd[1591]: time="2025-11-08T00:20:26.586268606Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:26.589567 containerd[1591]: time="2025-11-08T00:20:26.589526283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:26.591400 containerd[1591]: time="2025-11-08T00:20:26.591361982Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.370051154s" Nov 8 00:20:26.591554 containerd[1591]: time="2025-11-08T00:20:26.591537572Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:20:26.592124 containerd[1591]: time="2025-11-08T00:20:26.592073532Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:20:27.106978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239897497.mount: Deactivated successfully. 
Nov 8 00:20:27.113165 containerd[1591]: time="2025-11-08T00:20:27.113066681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:27.115615 containerd[1591]: time="2025-11-08T00:20:27.115492900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:20:27.116339 containerd[1591]: time="2025-11-08T00:20:27.116272972Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:27.121124 containerd[1591]: time="2025-11-08T00:20:27.121029305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:27.123301 containerd[1591]: time="2025-11-08T00:20:27.122619685Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 530.389457ms" Nov 8 00:20:27.123301 containerd[1591]: time="2025-11-08T00:20:27.122687713Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:20:27.123567 containerd[1591]: time="2025-11-08T00:20:27.123419941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:20:27.595155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017123574.mount: Deactivated successfully. Nov 8 00:20:28.304714 systemd-resolved[1484]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 8 00:20:28.654090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:20:28.664978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:28.911681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:28.922631 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:20:29.030880 kubelet[2180]: E1108 00:20:29.029510 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:20:29.033729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:20:29.034255 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
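Every kubelet start attempt so far exits with the same error: /var/lib/kubelet/config.yaml does not exist yet. That file is normally generated at cluster bootstrap (kubeadm writes it during init or join), so the crash loop is expected until that step runs; a sketch, with the version matching the images pulled above:

  # After bootstrap, the config the failing starts look for exists:
  kubeadm init --kubernetes-version v1.32.9   # writes /var/lib/kubelet/config.yaml
  ls -l /var/lib/kubelet/config.yaml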
Nov 8 00:20:29.674843 containerd[1591]: time="2025-11-08T00:20:29.674773879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:29.677130 containerd[1591]: time="2025-11-08T00:20:29.676417133Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 8 00:20:29.677130 containerd[1591]: time="2025-11-08T00:20:29.676631709Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:29.680201 containerd[1591]: time="2025-11-08T00:20:29.680150895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:29.681619 containerd[1591]: time="2025-11-08T00:20:29.681576325Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.55811477s" Nov 8 00:20:29.681798 containerd[1591]: time="2025-11-08T00:20:29.681775059Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:20:31.878384 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:31.885806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:31.924168 systemd[1]: Reloading requested from client PID 2218 ('systemctl') (unit session-7.scope)... Nov 8 00:20:31.924198 systemd[1]: Reloading... Nov 8 00:20:32.060495 zram_generator::config[2258]: No configuration found. Nov 8 00:20:32.220750 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:32.304053 systemd[1]: Reloading finished in 379 ms. Nov 8 00:20:32.349701 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:20:32.349804 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:20:32.350212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:32.355816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:32.512701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:32.517934 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:20:32.584032 kubelet[2320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:20:32.584032 kubelet[2320]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 8 00:20:32.584032 kubelet[2320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:20:32.584667 kubelet[2320]: I1108 00:20:32.584089 2320 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:20:32.880876 kubelet[2320]: I1108 00:20:32.880757 2320 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:20:32.880876 kubelet[2320]: I1108 00:20:32.880798 2320 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:20:32.881381 kubelet[2320]: I1108 00:20:32.881089 2320 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:20:32.910181 kubelet[2320]: I1108 00:20:32.909628 2320 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:20:32.914695 kubelet[2320]: E1108 00:20:32.914651 2320 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.144.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.144.43:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:20:32.921764 kubelet[2320]: E1108 00:20:32.921714 2320 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:20:32.921764 kubelet[2320]: I1108 00:20:32.921760 2320 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:20:32.924993 kubelet[2320]: I1108 00:20:32.924964 2320 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:20:32.927495 kubelet[2320]: I1108 00:20:32.927012 2320 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:20:32.927495 kubelet[2320]: I1108 00:20:32.927071 2320 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-01b3a4b0a8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:20:32.927495 kubelet[2320]: I1108 00:20:32.927280 2320 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:20:32.927495 kubelet[2320]: I1108 00:20:32.927291 2320 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:20:32.930232 kubelet[2320]: I1108 00:20:32.930102 2320 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:32.933794 kubelet[2320]: I1108 00:20:32.933762 2320 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:20:32.933891 kubelet[2320]: I1108 00:20:32.933808 2320 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:20:32.933891 kubelet[2320]: I1108 00:20:32.933837 2320 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:20:32.933891 kubelet[2320]: I1108 00:20:32.933860 2320 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:20:32.941960 kubelet[2320]: W1108 00:20:32.941911 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.144.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-01b3a4b0a8&limit=500&resourceVersion=0": dial tcp 64.23.144.43:6443: connect: connection refused Nov 8 00:20:32.942191 kubelet[2320]: E1108 00:20:32.942172 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.144.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-01b3a4b0a8&limit=500&resourceVersion=0\": dial tcp 64.23.144.43:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:20:32.942876 kubelet[2320]: 
W1108 00:20:32.942827 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.144.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.144.43:6443: connect: connection refused Nov 8 00:20:32.943000 kubelet[2320]: E1108 00:20:32.942986 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.144.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.144.43:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:20:32.943979 kubelet[2320]: I1108 00:20:32.943956 2320 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:20:32.947963 kubelet[2320]: I1108 00:20:32.947916 2320 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:20:32.949900 kubelet[2320]: W1108 00:20:32.949611 2320 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:20:32.950615 kubelet[2320]: I1108 00:20:32.950597 2320 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:20:32.951400 kubelet[2320]: I1108 00:20:32.951385 2320 server.go:1287] "Started kubelet" Nov 8 00:20:32.953868 kubelet[2320]: I1108 00:20:32.953848 2320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:20:32.959437 kubelet[2320]: E1108 00:20:32.958017 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.144.43:6443/api/v1/namespaces/default/events\": dial tcp 64.23.144.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-01b3a4b0a8.1875e01a58f0bfaa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-01b3a4b0a8,UID:ci-4081.3.6-n-01b3a4b0a8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-01b3a4b0a8,},FirstTimestamp:2025-11-08 00:20:32.95136145 +0000 UTC m=+0.428037319,LastTimestamp:2025-11-08 00:20:32.95136145 +0000 UTC m=+0.428037319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-01b3a4b0a8,}" Nov 8 00:20:32.960490 kubelet[2320]: I1108 00:20:32.959966 2320 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:20:32.961012 kubelet[2320]: I1108 00:20:32.960996 2320 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:20:32.962936 kubelet[2320]: I1108 00:20:32.962871 2320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:20:32.963735 kubelet[2320]: I1108 00:20:32.963720 2320 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:20:32.965047 kubelet[2320]: I1108 00:20:32.965029 2320 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:20:32.966638 kubelet[2320]: I1108 00:20:32.966618 2320 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:20:32.966977 kubelet[2320]: E1108 00:20:32.966958 2320 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"ci-4081.3.6-n-01b3a4b0a8\" not found" Nov 8 00:20:32.970260 kubelet[2320]: I1108 00:20:32.970226 2320 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:20:32.971143 kubelet[2320]: I1108 00:20:32.970345 2320 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:20:32.971143 kubelet[2320]: E1108 00:20:32.970615 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.144.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-01b3a4b0a8?timeout=10s\": dial tcp 64.23.144.43:6443: connect: connection refused" interval="200ms" Nov 8 00:20:32.971143 kubelet[2320]: I1108 00:20:32.970757 2320 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:20:32.971143 kubelet[2320]: I1108 00:20:32.970829 2320 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:20:32.976771 kubelet[2320]: I1108 00:20:32.975680 2320 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:20:32.978435 kubelet[2320]: I1108 00:20:32.978359 2320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:20:32.980570 kubelet[2320]: I1108 00:20:32.979586 2320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:20:32.980570 kubelet[2320]: I1108 00:20:32.979608 2320 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:20:32.980570 kubelet[2320]: I1108 00:20:32.979629 2320 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:20:32.980570 kubelet[2320]: I1108 00:20:32.979636 2320 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:20:32.980570 kubelet[2320]: E1108 00:20:32.979687 2320 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:20:32.987142 kubelet[2320]: W1108 00:20:32.987089 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.144.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.144.43:6443: connect: connection refused Nov 8 00:20:32.987528 kubelet[2320]: E1108 00:20:32.987500 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.144.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.144.43:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:20:33.007739 kubelet[2320]: W1108 00:20:33.007679 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.144.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.144.43:6443: connect: connection refused Nov 8 00:20:33.008061 kubelet[2320]: E1108 00:20:33.008027 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.144.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.144.43:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:20:33.011126 kubelet[2320]: E1108 00:20:33.011097 2320 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:20:33.015853 kubelet[2320]: I1108 00:20:33.015826 2320 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:20:33.015853 kubelet[2320]: I1108 00:20:33.015843 2320 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:20:33.015853 kubelet[2320]: I1108 00:20:33.015863 2320 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:33.018276 kubelet[2320]: I1108 00:20:33.018190 2320 policy_none.go:49] "None policy: Start" Nov 8 00:20:33.018276 kubelet[2320]: I1108 00:20:33.018230 2320 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:20:33.018488 kubelet[2320]: I1108 00:20:33.018333 2320 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:20:33.023726 kubelet[2320]: I1108 00:20:33.023667 2320 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:20:33.023970 kubelet[2320]: I1108 00:20:33.023939 2320 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:20:33.024033 kubelet[2320]: I1108 00:20:33.023961 2320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:20:33.025366 kubelet[2320]: I1108 00:20:33.025316 2320 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:20:33.029306 kubelet[2320]: E1108 00:20:33.029273 2320 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:20:33.029488 kubelet[2320]: E1108 00:20:33.029324 2320 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-01b3a4b0a8\" not found" Nov 8 00:20:33.085526 kubelet[2320]: E1108 00:20:33.084481 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-01b3a4b0a8\" not found" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.086257 kubelet[2320]: E1108 00:20:33.086206 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-01b3a4b0a8\" not found" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.088642 kubelet[2320]: E1108 00:20:33.088621 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-01b3a4b0a8\" not found" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.126140 kubelet[2320]: I1108 00:20:33.126081 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.126669 kubelet[2320]: E1108 00:20:33.126640 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.144.43:6443/api/v1/nodes\": dial tcp 64.23.144.43:6443: connect: connection refused" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.171553 kubelet[2320]: E1108 00:20:33.171494 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.144.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-01b3a4b0a8?timeout=10s\": dial tcp 64.23.144.43:6443: connect: connection refused" interval="400ms" Nov 8 00:20:33.172893 kubelet[2320]: I1108 00:20:33.172543 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2290e7127c8f2514d4604a5a0075fb0d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"2290e7127c8f2514d4604a5a0075fb0d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.172893 kubelet[2320]: I1108 00:20:33.172587 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/579cb466d2e62fd1c2ad715a38e1bd85-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"579cb466d2e62fd1c2ad715a38e1bd85\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.172893 kubelet[2320]: I1108 00:20:33.172608 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/579cb466d2e62fd1c2ad715a38e1bd85-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"579cb466d2e62fd1c2ad715a38e1bd85\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.172893 kubelet[2320]: I1108 00:20:33.172625 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38d63f0da6a22e5c8952b074b90beada-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"38d63f0da6a22e5c8952b074b90beada\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.172893 kubelet[2320]: I1108 00:20:33.172648 2320 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2290e7127c8f2514d4604a5a0075fb0d-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"2290e7127c8f2514d4604a5a0075fb0d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.173107 kubelet[2320]: I1108 00:20:33.172704 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/579cb466d2e62fd1c2ad715a38e1bd85-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"579cb466d2e62fd1c2ad715a38e1bd85\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.173107 kubelet[2320]: I1108 00:20:33.172763 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/579cb466d2e62fd1c2ad715a38e1bd85-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"579cb466d2e62fd1c2ad715a38e1bd85\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.173107 kubelet[2320]: I1108 00:20:33.172811 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/579cb466d2e62fd1c2ad715a38e1bd85-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"579cb466d2e62fd1c2ad715a38e1bd85\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.173107 kubelet[2320]: I1108 00:20:33.172846 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2290e7127c8f2514d4604a5a0075fb0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"2290e7127c8f2514d4604a5a0075fb0d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.329423 kubelet[2320]: I1108 00:20:33.329081 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.329943 kubelet[2320]: E1108 00:20:33.329910 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.144.43:6443/api/v1/nodes\": dial tcp 64.23.144.43:6443: connect: connection refused" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.386020 kubelet[2320]: E1108 00:20:33.385551 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:33.386572 containerd[1591]: time="2025-11-08T00:20:33.386537498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-01b3a4b0a8,Uid:2290e7127c8f2514d4604a5a0075fb0d,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:33.389381 systemd-resolved[1484]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Nov 8 00:20:33.390344 kubelet[2320]: E1108 00:20:33.389411 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:33.390344 kubelet[2320]: E1108 00:20:33.390031 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:33.394787 containerd[1591]: time="2025-11-08T00:20:33.394415173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-01b3a4b0a8,Uid:38d63f0da6a22e5c8952b074b90beada,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:33.394787 containerd[1591]: time="2025-11-08T00:20:33.394416959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8,Uid:579cb466d2e62fd1c2ad715a38e1bd85,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:33.573361 kubelet[2320]: E1108 00:20:33.573211 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.144.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-01b3a4b0a8?timeout=10s\": dial tcp 64.23.144.43:6443: connect: connection refused" interval="800ms" Nov 8 00:20:33.732312 kubelet[2320]: I1108 00:20:33.731762 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.732312 kubelet[2320]: E1108 00:20:33.732240 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.144.43:6443/api/v1/nodes\": dial tcp 64.23.144.43:6443: connect: connection refused" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:33.824790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4225361597.mount: Deactivated successfully. 
Nov 8 00:20:33.830447 containerd[1591]: time="2025-11-08T00:20:33.830368742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:33.831277 containerd[1591]: time="2025-11-08T00:20:33.831249823Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:20:33.832227 containerd[1591]: time="2025-11-08T00:20:33.832181651Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:33.834486 containerd[1591]: time="2025-11-08T00:20:33.833223705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:33.834486 containerd[1591]: time="2025-11-08T00:20:33.833766635Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:20:33.834486 containerd[1591]: time="2025-11-08T00:20:33.834316051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:20:33.836069 containerd[1591]: time="2025-11-08T00:20:33.836028700Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:33.838601 containerd[1591]: time="2025-11-08T00:20:33.838558857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 444.038336ms" Nov 8 00:20:33.839994 containerd[1591]: time="2025-11-08T00:20:33.839944731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:33.842261 containerd[1591]: time="2025-11-08T00:20:33.842207691Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 447.369441ms" Nov 8 00:20:33.844061 containerd[1591]: time="2025-11-08T00:20:33.844026403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 456.933933ms" Nov 8 00:20:33.985568 kubelet[2320]: W1108 00:20:33.985488 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.144.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.144.43:6443: connect: connection refused Nov 8 00:20:33.985568 kubelet[2320]: 
E1108 00:20:33.985539 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.144.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.144.43:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:20:33.987037 kubelet[2320]: W1108 00:20:33.986962 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.144.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.144.43:6443: connect: connection refused Nov 8 00:20:33.987266 kubelet[2320]: E1108 00:20:33.987217 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.144.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.144.43:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:20:34.020031 containerd[1591]: time="2025-11-08T00:20:34.018571253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:34.020031 containerd[1591]: time="2025-11-08T00:20:34.018647801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:34.020031 containerd[1591]: time="2025-11-08T00:20:34.018663314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:34.020031 containerd[1591]: time="2025-11-08T00:20:34.018786093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:34.021876 containerd[1591]: time="2025-11-08T00:20:34.021781156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:34.022810 containerd[1591]: time="2025-11-08T00:20:34.022724428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:34.022810 containerd[1591]: time="2025-11-08T00:20:34.022785509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:34.022982 containerd[1591]: time="2025-11-08T00:20:34.022931272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:34.022982 containerd[1591]: time="2025-11-08T00:20:34.022959858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:34.023317 containerd[1591]: time="2025-11-08T00:20:34.023268212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:34.023903 containerd[1591]: time="2025-11-08T00:20:34.023844664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:34.024409 containerd[1591]: time="2025-11-08T00:20:34.024238177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:34.109569 kubelet[2320]: W1108 00:20:34.109336 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.144.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.144.43:6443: connect: connection refused Nov 8 00:20:34.109569 kubelet[2320]: E1108 00:20:34.109433 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.144.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.144.43:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:20:34.125795 containerd[1591]: time="2025-11-08T00:20:34.125397537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-01b3a4b0a8,Uid:38d63f0da6a22e5c8952b074b90beada,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c4e79f6734d83c21ea286ee4000b7f88ad308699690224bf55cd19cd68cf9ae\"" Nov 8 00:20:34.138112 kubelet[2320]: E1108 00:20:34.137643 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:34.145563 containerd[1591]: time="2025-11-08T00:20:34.145514775Z" level=info msg="CreateContainer within sandbox \"0c4e79f6734d83c21ea286ee4000b7f88ad308699690224bf55cd19cd68cf9ae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:20:34.156835 containerd[1591]: time="2025-11-08T00:20:34.156637197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-01b3a4b0a8,Uid:2290e7127c8f2514d4604a5a0075fb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"714da32c91e19a68eb921efe0346928343bc3f828d80f4109cdfc24d0e8380d3\"" Nov 8 00:20:34.158382 kubelet[2320]: E1108 00:20:34.157810 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:34.161350 containerd[1591]: time="2025-11-08T00:20:34.161304030Z" level=info msg="CreateContainer within sandbox \"714da32c91e19a68eb921efe0346928343bc3f828d80f4109cdfc24d0e8380d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:20:34.168984 containerd[1591]: time="2025-11-08T00:20:34.168919806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8,Uid:579cb466d2e62fd1c2ad715a38e1bd85,Namespace:kube-system,Attempt:0,} returns sandbox id \"925d3befbd58e8ca516d76e510b7471627b7d778c407521250e500b45702fb45\"" Nov 8 00:20:34.169643 kubelet[2320]: E1108 00:20:34.169611 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:34.170020 containerd[1591]: time="2025-11-08T00:20:34.169806753Z" level=info msg="CreateContainer within sandbox \"0c4e79f6734d83c21ea286ee4000b7f88ad308699690224bf55cd19cd68cf9ae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a50f10e331927ca155b75228504cc57a251ead656d18c61238585d926b06a2c1\"" Nov 8 00:20:34.172457 containerd[1591]: time="2025-11-08T00:20:34.171329515Z" level=info msg="StartContainer for 
\"a50f10e331927ca155b75228504cc57a251ead656d18c61238585d926b06a2c1\"" Nov 8 00:20:34.173641 containerd[1591]: time="2025-11-08T00:20:34.173594011Z" level=info msg="CreateContainer within sandbox \"925d3befbd58e8ca516d76e510b7471627b7d778c407521250e500b45702fb45\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:20:34.175070 containerd[1591]: time="2025-11-08T00:20:34.175019264Z" level=info msg="CreateContainer within sandbox \"714da32c91e19a68eb921efe0346928343bc3f828d80f4109cdfc24d0e8380d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d3216e741daafa3fa396758dcfc78ec1436330e6b8e06bf6bd4dbf59890d0b1\"" Nov 8 00:20:34.177500 containerd[1591]: time="2025-11-08T00:20:34.176798710Z" level=info msg="StartContainer for \"7d3216e741daafa3fa396758dcfc78ec1436330e6b8e06bf6bd4dbf59890d0b1\"" Nov 8 00:20:34.187909 containerd[1591]: time="2025-11-08T00:20:34.187847904Z" level=info msg="CreateContainer within sandbox \"925d3befbd58e8ca516d76e510b7471627b7d778c407521250e500b45702fb45\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c6804d8e8918f33ac8ab8791e030af25abaaad50eac74a1a48b8f21be8241d78\"" Nov 8 00:20:34.188617 containerd[1591]: time="2025-11-08T00:20:34.188590351Z" level=info msg="StartContainer for \"c6804d8e8918f33ac8ab8791e030af25abaaad50eac74a1a48b8f21be8241d78\"" Nov 8 00:20:34.276172 kubelet[2320]: W1108 00:20:34.276098 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.144.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-01b3a4b0a8&limit=500&resourceVersion=0": dial tcp 64.23.144.43:6443: connect: connection refused Nov 8 00:20:34.276397 kubelet[2320]: E1108 00:20:34.276185 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.144.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-01b3a4b0a8&limit=500&resourceVersion=0\": dial tcp 64.23.144.43:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:20:34.283401 containerd[1591]: time="2025-11-08T00:20:34.282722861Z" level=info msg="StartContainer for \"7d3216e741daafa3fa396758dcfc78ec1436330e6b8e06bf6bd4dbf59890d0b1\" returns successfully" Nov 8 00:20:34.310696 containerd[1591]: time="2025-11-08T00:20:34.310635881Z" level=info msg="StartContainer for \"c6804d8e8918f33ac8ab8791e030af25abaaad50eac74a1a48b8f21be8241d78\" returns successfully" Nov 8 00:20:34.332554 containerd[1591]: time="2025-11-08T00:20:34.332494063Z" level=info msg="StartContainer for \"a50f10e331927ca155b75228504cc57a251ead656d18c61238585d926b06a2c1\" returns successfully" Nov 8 00:20:34.373919 kubelet[2320]: E1108 00:20:34.373791 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.144.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-01b3a4b0a8?timeout=10s\": dial tcp 64.23.144.43:6443: connect: connection refused" interval="1.6s" Nov 8 00:20:34.535345 kubelet[2320]: I1108 00:20:34.535309 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:34.538490 kubelet[2320]: E1108 00:20:34.537740 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.144.43:6443/api/v1/nodes\": dial tcp 64.23.144.43:6443: connect: connection refused" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:35.024780 
kubelet[2320]: E1108 00:20:35.024742 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-01b3a4b0a8\" not found" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:35.025329 kubelet[2320]: E1108 00:20:35.024893 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:35.030315 kubelet[2320]: E1108 00:20:35.030275 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-01b3a4b0a8\" not found" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:35.030525 kubelet[2320]: E1108 00:20:35.030508 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:35.037494 kubelet[2320]: E1108 00:20:35.036827 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-01b3a4b0a8\" not found" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:35.037494 kubelet[2320]: E1108 00:20:35.036963 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:36.042914 kubelet[2320]: E1108 00:20:36.042863 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-01b3a4b0a8\" not found" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.043371 kubelet[2320]: E1108 00:20:36.043107 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:36.044511 kubelet[2320]: E1108 00:20:36.043628 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-01b3a4b0a8\" not found" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.044511 kubelet[2320]: E1108 00:20:36.043750 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:36.140523 kubelet[2320]: I1108 00:20:36.140490 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.336831 kubelet[2320]: E1108 00:20:36.336697 2320 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-01b3a4b0a8\" not found" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.527781 kubelet[2320]: I1108 00:20:36.527734 2320 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.527781 kubelet[2320]: E1108 00:20:36.527780 2320 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-01b3a4b0a8\": node \"ci-4081.3.6-n-01b3a4b0a8\" not found" Nov 8 00:20:36.539340 kubelet[2320]: E1108 00:20:36.539289 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-01b3a4b0a8\" not found" Nov 8 00:20:36.668156 kubelet[2320]: I1108 00:20:36.667996 2320 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.679222 kubelet[2320]: E1108 00:20:36.677515 2320 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-01b3a4b0a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.679222 kubelet[2320]: I1108 00:20:36.677563 2320 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.680659 kubelet[2320]: E1108 00:20:36.680323 2320 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-01b3a4b0a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.680659 kubelet[2320]: I1108 00:20:36.680389 2320 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.682980 kubelet[2320]: E1108 00:20:36.682907 2320 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:36.945417 kubelet[2320]: I1108 00:20:36.945166 2320 apiserver.go:52] "Watching apiserver" Nov 8 00:20:36.970959 kubelet[2320]: I1108 00:20:36.970896 2320 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:20:37.037869 kubelet[2320]: I1108 00:20:37.037811 2320 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:37.041298 kubelet[2320]: E1108 00:20:37.040939 2320 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-01b3a4b0a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:37.041298 kubelet[2320]: E1108 00:20:37.041203 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:38.481713 systemd[1]: Reloading requested from client PID 2596 ('systemctl') (unit session-7.scope)... Nov 8 00:20:38.481740 systemd[1]: Reloading... Nov 8 00:20:38.606506 zram_generator::config[2641]: No configuration found. Nov 8 00:20:38.740051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:38.837928 systemd[1]: Reloading finished in 355 ms. Nov 8 00:20:38.875959 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:38.893409 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:20:38.894036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:38.902953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:39.062888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:20:39.073178 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:20:39.184046 kubelet[2695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:20:39.184046 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:20:39.184046 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:20:39.186499 kubelet[2695]: I1108 00:20:39.184152 2695 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:20:39.195758 kubelet[2695]: I1108 00:20:39.195672 2695 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:20:39.195758 kubelet[2695]: I1108 00:20:39.195733 2695 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:20:39.197136 kubelet[2695]: I1108 00:20:39.197097 2695 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:20:39.210078 kubelet[2695]: I1108 00:20:39.209911 2695 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:20:39.229509 kubelet[2695]: I1108 00:20:39.229306 2695 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:20:39.235427 kubelet[2695]: E1108 00:20:39.235050 2695 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:20:39.235427 kubelet[2695]: I1108 00:20:39.235081 2695 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:20:39.239771 kubelet[2695]: I1108 00:20:39.239729 2695 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:20:39.241457 kubelet[2695]: I1108 00:20:39.241371 2695 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:20:39.241757 kubelet[2695]: I1108 00:20:39.241444 2695 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-01b3a4b0a8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:20:39.241852 kubelet[2695]: I1108 00:20:39.241773 2695 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:20:39.241852 kubelet[2695]: I1108 00:20:39.241787 2695 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:20:39.241900 kubelet[2695]: I1108 00:20:39.241872 2695 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:39.242170 kubelet[2695]: I1108 00:20:39.242091 2695 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:20:39.242170 kubelet[2695]: I1108 00:20:39.242140 2695 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:20:39.242170 kubelet[2695]: I1108 00:20:39.242166 2695 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:20:39.242295 kubelet[2695]: I1108 00:20:39.242182 2695 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:20:39.249507 kubelet[2695]: I1108 00:20:39.248608 2695 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:20:39.249507 kubelet[2695]: I1108 00:20:39.249053 2695 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:20:39.249761 kubelet[2695]: I1108 00:20:39.249746 2695 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:20:39.249882 kubelet[2695]: I1108 00:20:39.249873 2695 server.go:1287] "Started kubelet" Nov 8 00:20:39.272878 kubelet[2695]: I1108 00:20:39.272809 2695 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:20:39.275186 kubelet[2695]: I1108 00:20:39.275042 2695 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:20:39.276704 kubelet[2695]: I1108 00:20:39.272991 2695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:20:39.277015 kubelet[2695]: I1108 00:20:39.277001 2695 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:20:39.277104 kubelet[2695]: I1108 00:20:39.273521 2695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:20:39.283773 kubelet[2695]: I1108 00:20:39.283389 2695 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:20:39.291947 kubelet[2695]: I1108 00:20:39.291916 2695 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:20:39.292095 kubelet[2695]: I1108 00:20:39.292043 2695 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:20:39.292502 kubelet[2695]: I1108 00:20:39.292170 2695 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:20:39.296415 kubelet[2695]: E1108 00:20:39.294299 2695 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:20:39.296415 kubelet[2695]: I1108 00:20:39.294588 2695 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:20:39.296415 kubelet[2695]: I1108 00:20:39.294703 2695 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:20:39.301670 kubelet[2695]: I1108 00:20:39.299840 2695 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:20:39.303756 kubelet[2695]: I1108 00:20:39.303721 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:20:39.305142 kubelet[2695]: I1108 00:20:39.305119 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:20:39.306622 kubelet[2695]: I1108 00:20:39.306599 2695 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:20:39.306761 kubelet[2695]: I1108 00:20:39.306751 2695 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:20:39.306985 kubelet[2695]: I1108 00:20:39.306805 2695 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:20:39.306985 kubelet[2695]: E1108 00:20:39.306921 2695 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:20:39.376565 kubelet[2695]: I1108 00:20:39.375907 2695 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:20:39.376565 kubelet[2695]: I1108 00:20:39.375936 2695 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:20:39.376565 kubelet[2695]: I1108 00:20:39.375965 2695 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:39.376565 kubelet[2695]: I1108 00:20:39.376204 2695 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:20:39.376565 kubelet[2695]: I1108 00:20:39.376222 2695 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:20:39.376565 kubelet[2695]: I1108 00:20:39.376267 2695 policy_none.go:49] "None policy: Start" Nov 8 00:20:39.376565 kubelet[2695]: I1108 00:20:39.376283 2695 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:20:39.376565 kubelet[2695]: I1108 00:20:39.376298 2695 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:20:39.376565 kubelet[2695]: I1108 00:20:39.376446 2695 state_mem.go:75] "Updated machine memory state" Nov 8 00:20:39.377853 kubelet[2695]: I1108 00:20:39.377825 2695 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:20:39.378387 kubelet[2695]: I1108 00:20:39.378047 2695 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:20:39.378387 kubelet[2695]: I1108 00:20:39.378066 2695 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:20:39.381001 kubelet[2695]: I1108 00:20:39.380940 2695 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:20:39.384632 kubelet[2695]: E1108 00:20:39.384603 2695 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:20:39.412145 kubelet[2695]: I1108 00:20:39.409004 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.412145 kubelet[2695]: I1108 00:20:39.409822 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.412145 kubelet[2695]: I1108 00:20:39.411678 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.415005 kubelet[2695]: W1108 00:20:39.414763 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:20:39.417120 kubelet[2695]: W1108 00:20:39.417097 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:20:39.417924 kubelet[2695]: W1108 00:20:39.417884 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:20:39.489611 kubelet[2695]: I1108 00:20:39.485954 2695 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.493232 kubelet[2695]: I1108 00:20:39.493095 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/579cb466d2e62fd1c2ad715a38e1bd85-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"579cb466d2e62fd1c2ad715a38e1bd85\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.495303 kubelet[2695]: I1108 00:20:39.493761 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/579cb466d2e62fd1c2ad715a38e1bd85-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"579cb466d2e62fd1c2ad715a38e1bd85\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.495303 kubelet[2695]: I1108 00:20:39.493806 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38d63f0da6a22e5c8952b074b90beada-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"38d63f0da6a22e5c8952b074b90beada\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.495303 kubelet[2695]: I1108 00:20:39.493840 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/579cb466d2e62fd1c2ad715a38e1bd85-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"579cb466d2e62fd1c2ad715a38e1bd85\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.495303 kubelet[2695]: I1108 00:20:39.493885 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/579cb466d2e62fd1c2ad715a38e1bd85-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"579cb466d2e62fd1c2ad715a38e1bd85\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 
00:20:39.495303 kubelet[2695]: I1108 00:20:39.493904 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/579cb466d2e62fd1c2ad715a38e1bd85-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"579cb466d2e62fd1c2ad715a38e1bd85\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.495662 kubelet[2695]: I1108 00:20:39.493923 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2290e7127c8f2514d4604a5a0075fb0d-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"2290e7127c8f2514d4604a5a0075fb0d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.495662 kubelet[2695]: I1108 00:20:39.493939 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2290e7127c8f2514d4604a5a0075fb0d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"2290e7127c8f2514d4604a5a0075fb0d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.495662 kubelet[2695]: I1108 00:20:39.493958 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2290e7127c8f2514d4604a5a0075fb0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-01b3a4b0a8\" (UID: \"2290e7127c8f2514d4604a5a0075fb0d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.506372 kubelet[2695]: I1108 00:20:39.505562 2695 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.506372 kubelet[2695]: I1108 00:20:39.505778 2695 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:39.717423 kubelet[2695]: E1108 00:20:39.717059 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:39.718515 kubelet[2695]: E1108 00:20:39.718245 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:39.720235 kubelet[2695]: E1108 00:20:39.718862 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:40.256064 kubelet[2695]: I1108 00:20:40.254748 2695 apiserver.go:52] "Watching apiserver" Nov 8 00:20:40.292724 kubelet[2695]: I1108 00:20:40.292622 2695 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:20:40.342188 kubelet[2695]: I1108 00:20:40.340033 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:40.342188 kubelet[2695]: E1108 00:20:40.340425 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:40.342188 kubelet[2695]: E1108 00:20:40.340763 2695 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:40.351255 kubelet[2695]: W1108 00:20:40.351113 2695 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:20:40.354510 kubelet[2695]: E1108 00:20:40.352563 2695 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-01b3a4b0a8\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:20:40.354897 kubelet[2695]: E1108 00:20:40.354875 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:40.402154 kubelet[2695]: I1108 00:20:40.401695 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-01b3a4b0a8" podStartSLOduration=1.401674323 podStartE2EDuration="1.401674323s" podCreationTimestamp="2025-11-08 00:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:40.383751744 +0000 UTC m=+1.300519977" watchObservedRunningTime="2025-11-08 00:20:40.401674323 +0000 UTC m=+1.318442562" Nov 8 00:20:40.415120 kubelet[2695]: I1108 00:20:40.415047 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-01b3a4b0a8" podStartSLOduration=1.415027938 podStartE2EDuration="1.415027938s" podCreationTimestamp="2025-11-08 00:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:40.40196711 +0000 UTC m=+1.318735339" watchObservedRunningTime="2025-11-08 00:20:40.415027938 +0000 UTC m=+1.331796150" Nov 8 00:20:40.433242 kubelet[2695]: I1108 00:20:40.433134 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-01b3a4b0a8" podStartSLOduration=1.4331042410000001 podStartE2EDuration="1.433104241s" podCreationTimestamp="2025-11-08 00:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:40.415712959 +0000 UTC m=+1.332481191" watchObservedRunningTime="2025-11-08 00:20:40.433104241 +0000 UTC m=+1.349872480" Nov 8 00:20:41.342024 kubelet[2695]: E1108 00:20:41.341896 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:41.342024 kubelet[2695]: E1108 00:20:41.341920 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:43.753166 kubelet[2695]: I1108 00:20:43.752963 2695 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:20:43.754207 containerd[1591]: time="2025-11-08T00:20:43.754149241Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 8 00:20:43.754622 kubelet[2695]: I1108 00:20:43.754422 2695 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:20:44.731987 kubelet[2695]: I1108 00:20:44.731901 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/084db580-8a04-487c-adfb-6dc7cb8365c1-kube-proxy\") pod \"kube-proxy-wmjwl\" (UID: \"084db580-8a04-487c-adfb-6dc7cb8365c1\") " pod="kube-system/kube-proxy-wmjwl" Nov 8 00:20:44.731987 kubelet[2695]: I1108 00:20:44.731954 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49594\" (UniqueName: \"kubernetes.io/projected/084db580-8a04-487c-adfb-6dc7cb8365c1-kube-api-access-49594\") pod \"kube-proxy-wmjwl\" (UID: \"084db580-8a04-487c-adfb-6dc7cb8365c1\") " pod="kube-system/kube-proxy-wmjwl" Nov 8 00:20:44.732177 kubelet[2695]: I1108 00:20:44.732058 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/084db580-8a04-487c-adfb-6dc7cb8365c1-xtables-lock\") pod \"kube-proxy-wmjwl\" (UID: \"084db580-8a04-487c-adfb-6dc7cb8365c1\") " pod="kube-system/kube-proxy-wmjwl" Nov 8 00:20:44.732177 kubelet[2695]: I1108 00:20:44.732094 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/084db580-8a04-487c-adfb-6dc7cb8365c1-lib-modules\") pod \"kube-proxy-wmjwl\" (UID: \"084db580-8a04-487c-adfb-6dc7cb8365c1\") " pod="kube-system/kube-proxy-wmjwl" Nov 8 00:20:44.833233 kubelet[2695]: I1108 00:20:44.832648 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/07299124-b0a5-4af7-ad72-280b19b3abe7-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ln9gx\" (UID: \"07299124-b0a5-4af7-ad72-280b19b3abe7\") " pod="tigera-operator/tigera-operator-7dcd859c48-ln9gx" Nov 8 00:20:44.833233 kubelet[2695]: I1108 00:20:44.832714 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8rb9\" (UniqueName: \"kubernetes.io/projected/07299124-b0a5-4af7-ad72-280b19b3abe7-kube-api-access-b8rb9\") pod \"tigera-operator-7dcd859c48-ln9gx\" (UID: \"07299124-b0a5-4af7-ad72-280b19b3abe7\") " pod="tigera-operator/tigera-operator-7dcd859c48-ln9gx" Nov 8 00:20:44.976806 kubelet[2695]: E1108 00:20:44.976032 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:44.977847 containerd[1591]: time="2025-11-08T00:20:44.977419584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wmjwl,Uid:084db580-8a04-487c-adfb-6dc7cb8365c1,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:45.012262 containerd[1591]: time="2025-11-08T00:20:45.008929188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:45.012262 containerd[1591]: time="2025-11-08T00:20:45.010735803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:45.012262 containerd[1591]: time="2025-11-08T00:20:45.010750345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:45.012262 containerd[1591]: time="2025-11-08T00:20:45.010872794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:45.066681 containerd[1591]: time="2025-11-08T00:20:45.066560804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wmjwl,Uid:084db580-8a04-487c-adfb-6dc7cb8365c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4e88d4c5fd5e1b7a6499e927ffffd499068ee1f9e84fd9fe743fa566b1d93ee\"" Nov 8 00:20:45.067903 kubelet[2695]: E1108 00:20:45.067871 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:45.071659 containerd[1591]: time="2025-11-08T00:20:45.071601484Z" level=info msg="CreateContainer within sandbox \"c4e88d4c5fd5e1b7a6499e927ffffd499068ee1f9e84fd9fe743fa566b1d93ee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:20:45.088744 containerd[1591]: time="2025-11-08T00:20:45.088680914Z" level=info msg="CreateContainer within sandbox \"c4e88d4c5fd5e1b7a6499e927ffffd499068ee1f9e84fd9fe743fa566b1d93ee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7884272d96dc66f347d987bb94f6dbd65b625720fa2f7b8d2f04f02d13953a3e\"" Nov 8 00:20:45.091553 containerd[1591]: time="2025-11-08T00:20:45.091105913Z" level=info msg="StartContainer for \"7884272d96dc66f347d987bb94f6dbd65b625720fa2f7b8d2f04f02d13953a3e\"" Nov 8 00:20:45.099365 containerd[1591]: time="2025-11-08T00:20:45.098904111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ln9gx,Uid:07299124-b0a5-4af7-ad72-280b19b3abe7,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:20:45.133141 containerd[1591]: time="2025-11-08T00:20:45.132981892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:45.133141 containerd[1591]: time="2025-11-08T00:20:45.133071846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:45.134856 containerd[1591]: time="2025-11-08T00:20:45.133623667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:45.136750 containerd[1591]: time="2025-11-08T00:20:45.136647395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:45.211018 containerd[1591]: time="2025-11-08T00:20:45.210729169Z" level=info msg="StartContainer for \"7884272d96dc66f347d987bb94f6dbd65b625720fa2f7b8d2f04f02d13953a3e\" returns successfully" Nov 8 00:20:45.247781 containerd[1591]: time="2025-11-08T00:20:45.247730485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ln9gx,Uid:07299124-b0a5-4af7-ad72-280b19b3abe7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0e3b9c10e0137ee5e6f26c696037be58d4264cec4bb8fd74173e1afa01b878aa\"" Nov 8 00:20:45.255338 containerd[1591]: time="2025-11-08T00:20:45.255135578Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:20:45.360580 kubelet[2695]: E1108 00:20:45.359879 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:45.370942 kubelet[2695]: I1108 00:20:45.370339 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wmjwl" podStartSLOduration=1.3703146689999999 podStartE2EDuration="1.370314669s" podCreationTimestamp="2025-11-08 00:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:45.369892668 +0000 UTC m=+6.286660900" watchObservedRunningTime="2025-11-08 00:20:45.370314669 +0000 UTC m=+6.287082903" Nov 8 00:20:46.117042 kubelet[2695]: E1108 00:20:46.116647 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:46.350390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752109832.mount: Deactivated successfully. 
Nov 8 00:20:46.360801 kubelet[2695]: E1108 00:20:46.360769 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:47.376577 containerd[1591]: time="2025-11-08T00:20:47.376511024Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:47.377600 containerd[1591]: time="2025-11-08T00:20:47.377384125Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:20:47.378367 containerd[1591]: time="2025-11-08T00:20:47.378138702Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:47.380340 containerd[1591]: time="2025-11-08T00:20:47.380311376Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:47.381121 containerd[1591]: time="2025-11-08T00:20:47.381095750Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.12591045s" Nov 8 00:20:47.381882 containerd[1591]: time="2025-11-08T00:20:47.381199937Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:20:47.385020 containerd[1591]: time="2025-11-08T00:20:47.384985606Z" level=info msg="CreateContainer within sandbox \"0e3b9c10e0137ee5e6f26c696037be58d4264cec4bb8fd74173e1afa01b878aa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:20:47.400853 containerd[1591]: time="2025-11-08T00:20:47.400807019Z" level=info msg="CreateContainer within sandbox \"0e3b9c10e0137ee5e6f26c696037be58d4264cec4bb8fd74173e1afa01b878aa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"78679e2133d410a8746e2ff1d8541b310b25bfe537a716ac00a7003fb374b326\"" Nov 8 00:20:47.402820 containerd[1591]: time="2025-11-08T00:20:47.401837734Z" level=info msg="StartContainer for \"78679e2133d410a8746e2ff1d8541b310b25bfe537a716ac00a7003fb374b326\"" Nov 8 00:20:47.473378 containerd[1591]: time="2025-11-08T00:20:47.473012815Z" level=info msg="StartContainer for \"78679e2133d410a8746e2ff1d8541b310b25bfe537a716ac00a7003fb374b326\" returns successfully" Nov 8 00:20:48.041304 kubelet[2695]: E1108 00:20:48.040975 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:48.368274 kubelet[2695]: E1108 00:20:48.367491 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:48.384500 kubelet[2695]: I1108 00:20:48.383534 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ln9gx" podStartSLOduration=2.253233594 
podStartE2EDuration="4.383516143s" podCreationTimestamp="2025-11-08 00:20:44 +0000 UTC" firstStartedPulling="2025-11-08 00:20:45.251783456 +0000 UTC m=+6.168551726" lastFinishedPulling="2025-11-08 00:20:47.382066048 +0000 UTC m=+8.298834275" observedRunningTime="2025-11-08 00:20:48.381846137 +0000 UTC m=+9.298614370" watchObservedRunningTime="2025-11-08 00:20:48.383516143 +0000 UTC m=+9.300284376" Nov 8 00:20:49.372395 kubelet[2695]: E1108 00:20:49.371266 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:49.442286 kubelet[2695]: E1108 00:20:49.440359 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:50.376266 kubelet[2695]: E1108 00:20:50.376219 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:20:51.509814 update_engine[1568]: I20251108 00:20:51.509718 1568 update_attempter.cc:509] Updating boot flags... Nov 8 00:20:51.568514 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3064) Nov 8 00:20:51.651488 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3068) Nov 8 00:20:54.447782 sudo[1804]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:54.455853 sshd[1798]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:54.466993 systemd[1]: sshd@6-64.23.144.43:22-139.178.68.195:35372.service: Deactivated successfully. Nov 8 00:20:54.473136 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:20:54.474442 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:20:54.478841 systemd-logind[1566]: Removed session 7. 
Nov 8 00:21:00.839992 kubelet[2695]: I1108 00:21:00.839196 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0ab63b5b-6d41-4751-8107-cac320d8823c-typha-certs\") pod \"calico-typha-77965d89cd-hf6xk\" (UID: \"0ab63b5b-6d41-4751-8107-cac320d8823c\") " pod="calico-system/calico-typha-77965d89cd-hf6xk" Nov 8 00:21:00.839992 kubelet[2695]: I1108 00:21:00.839814 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwqx2\" (UniqueName: \"kubernetes.io/projected/0ab63b5b-6d41-4751-8107-cac320d8823c-kube-api-access-zwqx2\") pod \"calico-typha-77965d89cd-hf6xk\" (UID: \"0ab63b5b-6d41-4751-8107-cac320d8823c\") " pod="calico-system/calico-typha-77965d89cd-hf6xk" Nov 8 00:21:00.839992 kubelet[2695]: I1108 00:21:00.839844 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ab63b5b-6d41-4751-8107-cac320d8823c-tigera-ca-bundle\") pod \"calico-typha-77965d89cd-hf6xk\" (UID: \"0ab63b5b-6d41-4751-8107-cac320d8823c\") " pod="calico-system/calico-typha-77965d89cd-hf6xk" Nov 8 00:21:01.041124 kubelet[2695]: I1108 00:21:01.041004 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3c9a7f76-0d0a-4939-a961-7ec233fc8079-var-run-calico\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.041349 kubelet[2695]: I1108 00:21:01.041142 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3c9a7f76-0d0a-4939-a961-7ec233fc8079-cni-bin-dir\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.041349 kubelet[2695]: I1108 00:21:01.041181 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3c9a7f76-0d0a-4939-a961-7ec233fc8079-flexvol-driver-host\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.041349 kubelet[2695]: I1108 00:21:01.041199 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c9a7f76-0d0a-4939-a961-7ec233fc8079-lib-modules\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.041349 kubelet[2695]: I1108 00:21:01.041218 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c9a7f76-0d0a-4939-a961-7ec233fc8079-var-lib-calico\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.041349 kubelet[2695]: I1108 00:21:01.041244 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3c9a7f76-0d0a-4939-a961-7ec233fc8079-cni-log-dir\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 
00:21:01.041650 kubelet[2695]: I1108 00:21:01.041267 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3c9a7f76-0d0a-4939-a961-7ec233fc8079-node-certs\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.041650 kubelet[2695]: I1108 00:21:01.041282 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c9a7f76-0d0a-4939-a961-7ec233fc8079-tigera-ca-bundle\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.041650 kubelet[2695]: I1108 00:21:01.041299 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3c9a7f76-0d0a-4939-a961-7ec233fc8079-cni-net-dir\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.041650 kubelet[2695]: I1108 00:21:01.041312 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3c9a7f76-0d0a-4939-a961-7ec233fc8079-policysync\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.041650 kubelet[2695]: I1108 00:21:01.041374 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c9a7f76-0d0a-4939-a961-7ec233fc8079-xtables-lock\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.041894 kubelet[2695]: I1108 00:21:01.041401 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thj5t\" (UniqueName: \"kubernetes.io/projected/3c9a7f76-0d0a-4939-a961-7ec233fc8079-kube-api-access-thj5t\") pod \"calico-node-2dmbl\" (UID: \"3c9a7f76-0d0a-4939-a961-7ec233fc8079\") " pod="calico-system/calico-node-2dmbl" Nov 8 00:21:01.108699 kubelet[2695]: E1108 00:21:01.106977 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:01.109848 containerd[1591]: time="2025-11-08T00:21:01.109651354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77965d89cd-hf6xk,Uid:0ab63b5b-6d41-4751-8107-cac320d8823c,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:01.151508 kubelet[2695]: E1108 00:21:01.148801 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384" Nov 8 00:21:01.161124 kubelet[2695]: E1108 00:21:01.160993 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.161124 kubelet[2695]: W1108 00:21:01.161028 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, 
args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.175375 kubelet[2695]: E1108 00:21:01.175252 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.176550 kubelet[2695]: W1108 00:21:01.176514 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.177844 kubelet[2695]: E1108 00:21:01.177687 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.180230 kubelet[2695]: E1108 00:21:01.179434 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.181038 kubelet[2695]: E1108 00:21:01.180969 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.181038 kubelet[2695]: W1108 00:21:01.180991 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.181038 kubelet[2695]: E1108 00:21:01.181032 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.186808 kubelet[2695]: E1108 00:21:01.185831 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.186808 kubelet[2695]: W1108 00:21:01.185864 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.186808 kubelet[2695]: E1108 00:21:01.185892 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.192790 kubelet[2695]: E1108 00:21:01.191451 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.192790 kubelet[2695]: W1108 00:21:01.192551 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.192790 kubelet[2695]: E1108 00:21:01.192579 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.203084 containerd[1591]: time="2025-11-08T00:21:01.202814526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:01.203084 containerd[1591]: time="2025-11-08T00:21:01.202874616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:01.203084 containerd[1591]: time="2025-11-08T00:21:01.202899264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:01.204646 containerd[1591]: time="2025-11-08T00:21:01.204408322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:01.216123 kubelet[2695]: E1108 00:21:01.213662 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.216123 kubelet[2695]: W1108 00:21:01.213697 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.216123 kubelet[2695]: E1108 00:21:01.213725 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.216123 kubelet[2695]: E1108 00:21:01.215109 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.216123 kubelet[2695]: W1108 00:21:01.215128 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.216123 kubelet[2695]: E1108 00:21:01.215150 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.231606 kubelet[2695]: E1108 00:21:01.231559 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.231606 kubelet[2695]: W1108 00:21:01.231600 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.231838 kubelet[2695]: E1108 00:21:01.231696 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.234422 kubelet[2695]: E1108 00:21:01.234372 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.234422 kubelet[2695]: W1108 00:21:01.234408 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.234665 kubelet[2695]: E1108 00:21:01.234437 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:01.239782 kubelet[2695]: E1108 00:21:01.238108 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.239782 kubelet[2695]: W1108 00:21:01.238138 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.239782 kubelet[2695]: E1108 00:21:01.238166 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.241439 kubelet[2695]: E1108 00:21:01.241399 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.241439 kubelet[2695]: W1108 00:21:01.241435 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.242826 kubelet[2695]: E1108 00:21:01.242664 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.247264 kubelet[2695]: E1108 00:21:01.247231 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.247264 kubelet[2695]: W1108 00:21:01.247260 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.247721 kubelet[2695]: E1108 00:21:01.247287 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.249745 kubelet[2695]: E1108 00:21:01.249166 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.249745 kubelet[2695]: W1108 00:21:01.249192 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.249745 kubelet[2695]: E1108 00:21:01.249217 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.251663 kubelet[2695]: E1108 00:21:01.251301 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.251795 kubelet[2695]: W1108 00:21:01.251665 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.251795 kubelet[2695]: E1108 00:21:01.251691 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:01.253504 kubelet[2695]: E1108 00:21:01.253432 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.253767 kubelet[2695]: W1108 00:21:01.253468 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.253767 kubelet[2695]: E1108 00:21:01.253711 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.259720 kubelet[2695]: E1108 00:21:01.259677 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.259720 kubelet[2695]: W1108 00:21:01.259712 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.259908 kubelet[2695]: E1108 00:21:01.259742 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.263941 kubelet[2695]: E1108 00:21:01.263898 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.263941 kubelet[2695]: W1108 00:21:01.263935 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.264165 kubelet[2695]: E1108 00:21:01.263962 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.268587 kubelet[2695]: E1108 00:21:01.268548 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.268587 kubelet[2695]: W1108 00:21:01.268580 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.268766 kubelet[2695]: E1108 00:21:01.268612 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.271009 kubelet[2695]: E1108 00:21:01.270968 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.271009 kubelet[2695]: W1108 00:21:01.271000 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.271256 kubelet[2695]: E1108 00:21:01.271028 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:01.276669 kubelet[2695]: E1108 00:21:01.276573 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.276669 kubelet[2695]: W1108 00:21:01.276640 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.276669 kubelet[2695]: E1108 00:21:01.276675 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.282217 kubelet[2695]: E1108 00:21:01.282179 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.282217 kubelet[2695]: W1108 00:21:01.282216 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.282388 kubelet[2695]: E1108 00:21:01.282242 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.284185 kubelet[2695]: E1108 00:21:01.284153 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.284185 kubelet[2695]: W1108 00:21:01.284180 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.284383 kubelet[2695]: E1108 00:21:01.284361 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.286666 kubelet[2695]: E1108 00:21:01.286629 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.286666 kubelet[2695]: W1108 00:21:01.286654 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.286851 kubelet[2695]: E1108 00:21:01.286686 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.287933 kubelet[2695]: E1108 00:21:01.287897 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.287933 kubelet[2695]: W1108 00:21:01.287926 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.288529 kubelet[2695]: E1108 00:21:01.287947 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:01.289675 kubelet[2695]: E1108 00:21:01.289651 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.289675 kubelet[2695]: W1108 00:21:01.289674 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.289803 kubelet[2695]: E1108 00:21:01.289693 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.291777 kubelet[2695]: E1108 00:21:01.291752 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.291777 kubelet[2695]: W1108 00:21:01.291775 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.291892 kubelet[2695]: E1108 00:21:01.291813 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.293641 kubelet[2695]: E1108 00:21:01.293610 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.293641 kubelet[2695]: W1108 00:21:01.293633 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.293757 kubelet[2695]: E1108 00:21:01.293659 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.295735 kubelet[2695]: E1108 00:21:01.295709 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.295735 kubelet[2695]: W1108 00:21:01.295733 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.295891 kubelet[2695]: E1108 00:21:01.295753 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:01.295891 kubelet[2695]: I1108 00:21:01.295784 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384-kubelet-dir\") pod \"csi-node-driver-2mnck\" (UID: \"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384\") " pod="calico-system/csi-node-driver-2mnck" Nov 8 00:21:01.297563 kubelet[2695]: E1108 00:21:01.297532 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:01.300655 kubelet[2695]: E1108 00:21:01.300610 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.300655 kubelet[2695]: W1108 00:21:01.300650 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.300843 kubelet[2695]: E1108 00:21:01.300681 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.300843 kubelet[2695]: I1108 00:21:01.300726 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384-registration-dir\") pod \"csi-node-driver-2mnck\" (UID: \"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384\") " pod="calico-system/csi-node-driver-2mnck" Nov 8 00:21:01.302607 kubelet[2695]: E1108 00:21:01.302576 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.302607 kubelet[2695]: W1108 00:21:01.302602 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.302739 kubelet[2695]: E1108 00:21:01.302627 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:01.302739 kubelet[2695]: I1108 00:21:01.302657 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384-varrun\") pod \"csi-node-driver-2mnck\" (UID: \"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384\") " pod="calico-system/csi-node-driver-2mnck" Nov 8 00:21:01.305221 containerd[1591]: time="2025-11-08T00:21:01.304870683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2dmbl,Uid:3c9a7f76-0d0a-4939-a961-7ec233fc8079,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:01.305626 kubelet[2695]: E1108 00:21:01.305589 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.305626 kubelet[2695]: W1108 00:21:01.305618 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.305725 kubelet[2695]: E1108 00:21:01.305644 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.305725 kubelet[2695]: I1108 00:21:01.305685 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfpxv\" (UniqueName: \"kubernetes.io/projected/f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384-kube-api-access-wfpxv\") pod \"csi-node-driver-2mnck\" (UID: \"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384\") " pod="calico-system/csi-node-driver-2mnck" Nov 8 00:21:01.312825 kubelet[2695]: E1108 00:21:01.312691 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.312825 kubelet[2695]: W1108 00:21:01.312723 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.312825 kubelet[2695]: E1108 00:21:01.312750 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.312825 kubelet[2695]: I1108 00:21:01.312788 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384-socket-dir\") pod \"csi-node-driver-2mnck\" (UID: \"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384\") " pod="calico-system/csi-node-driver-2mnck" Nov 8 00:21:01.314522 kubelet[2695]: E1108 00:21:01.313142 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.314522 kubelet[2695]: W1108 00:21:01.313158 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.314522 kubelet[2695]: E1108 00:21:01.313179 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:01.317380 kubelet[2695]: E1108 00:21:01.316575 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.317380 kubelet[2695]: W1108 00:21:01.316603 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.317380 kubelet[2695]: E1108 00:21:01.316714 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.318713 kubelet[2695]: E1108 00:21:01.318548 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.319234 kubelet[2695]: W1108 00:21:01.318572 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.321555 kubelet[2695]: E1108 00:21:01.321008 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.321555 kubelet[2695]: E1108 00:21:01.321160 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.321555 kubelet[2695]: W1108 00:21:01.321176 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.321555 kubelet[2695]: E1108 00:21:01.321369 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.324240 kubelet[2695]: E1108 00:21:01.323979 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.324240 kubelet[2695]: W1108 00:21:01.324023 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.326053 kubelet[2695]: E1108 00:21:01.325838 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.326401 kubelet[2695]: E1108 00:21:01.326333 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.326401 kubelet[2695]: W1108 00:21:01.326352 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.327069 kubelet[2695]: E1108 00:21:01.326562 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:01.327069 kubelet[2695]: E1108 00:21:01.326910 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.327069 kubelet[2695]: W1108 00:21:01.326922 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.327069 kubelet[2695]: E1108 00:21:01.326936 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.327871 kubelet[2695]: E1108 00:21:01.327493 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.327871 kubelet[2695]: W1108 00:21:01.327515 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.327871 kubelet[2695]: E1108 00:21:01.327533 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.327982 kubelet[2695]: E1108 00:21:01.327900 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.327982 kubelet[2695]: W1108 00:21:01.327916 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.327982 kubelet[2695]: E1108 00:21:01.327932 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.330095 kubelet[2695]: E1108 00:21:01.329887 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:01.330095 kubelet[2695]: W1108 00:21:01.329909 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:01.330095 kubelet[2695]: E1108 00:21:01.329925 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:01.360663 containerd[1591]: time="2025-11-08T00:21:01.360026612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:01.360663 containerd[1591]: time="2025-11-08T00:21:01.360108658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:01.360663 containerd[1591]: time="2025-11-08T00:21:01.360125124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:01.364270 containerd[1591]: time="2025-11-08T00:21:01.360255799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:01.374765 containerd[1591]: time="2025-11-08T00:21:01.374644749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77965d89cd-hf6xk,Uid:0ab63b5b-6d41-4751-8107-cac320d8823c,Namespace:calico-system,Attempt:0,} returns sandbox id \"d005bc45de70385366a65267c3ffd7b7293498a997f4b45ba0ba0eb7f090ca9b\""
Nov 8 00:21:01.380131 kubelet[2695]: E1108 00:21:01.380093 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 8 00:21:01.384434 containerd[1591]: time="2025-11-08T00:21:01.384216110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 8 00:21:01.414435 kubelet[2695]: E1108 00:21:01.414150 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:21:01.414435 kubelet[2695]: W1108 00:21:01.414173 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:21:01.414435 kubelet[2695]: E1108 00:21:01.414329 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:21:01.431439 kubelet[2695]: E1108 00:21:01.430485 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:21:01.431439 kubelet[2695]: W1108 00:21:01.430495 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:21:01.431439 kubelet[2695]: E1108 00:21:01.430506 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:21:01.432754 containerd[1591]: time="2025-11-08T00:21:01.432722731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2dmbl,Uid:3c9a7f76-0d0a-4939-a961-7ec233fc8079,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5c0c67426191ee67f9bbee5bc066b0320345c2ed8f5265b0e919e36a0856490\""
Nov 8 00:21:01.434380 kubelet[2695]: E1108 00:21:01.433901 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 8 00:21:01.445129 kubelet[2695]: E1108 00:21:01.445095 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:21:01.445819 kubelet[2695]: W1108 00:21:01.445719 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:21:01.445819 kubelet[2695]: E1108 00:21:01.445750 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:21:02.886623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount595031442.mount: Deactivated successfully.
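The driver-call.go:262 / driver-call.go:149 / plugins.go:695 triplet above recurs because kubelet keeps re-probing the FlexVolume plugin directory and finding nodeagent~uds without its uds binary, which the flexvol-driver init container being pulled below is about to install. A minimal Go sketch of why a missing driver yields exactly this pair of errors; the driverStatus struct and the bare "uds" lookup are illustrative assumptions, not kubelet's actual code:

// flexprobe.go - sketch of the FlexVolume probe failure mode: the driver
// binary is missing, so the exec fails, stdout stays empty, and
// unmarshalling "" as JSON fails with "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is illustrative; kubelet's real DriverStatus has more fields.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probeInit(driver string) (*driverStatus, error) {
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// A bare, non-installed name reproduces Go's
		// "executable file not found in $PATH" from driver-call.go:149.
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: [init], error: %v, output: %q\n", driver, err, out)
	}
	var st driverStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		// Empty output is not valid JSON, matching driver-call.go:262.
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, jerr)
	}
	return &st, nil
}

func main() {
	if _, err := probeInit("uds"); err != nil {
		fmt.Println(err)
	}
}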
Nov 8 00:21:03.309426 kubelet[2695]: E1108 00:21:03.307761 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384"
Nov 8 00:21:04.801510 containerd[1591]: time="2025-11-08T00:21:04.801170678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:04.802566 containerd[1591]: time="2025-11-08T00:21:04.802508135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 8 00:21:04.803712 containerd[1591]: time="2025-11-08T00:21:04.803667211Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:04.806511 containerd[1591]: time="2025-11-08T00:21:04.805875458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:04.807256 containerd[1591]: time="2025-11-08T00:21:04.806962442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.422709661s"
Nov 8 00:21:04.807256 containerd[1591]: time="2025-11-08T00:21:04.807000708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 8 00:21:04.810433 containerd[1591]: time="2025-11-08T00:21:04.809970447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 8 00:21:04.837913 containerd[1591]: time="2025-11-08T00:21:04.837159172Z" level=info msg="CreateContainer within sandbox \"d005bc45de70385366a65267c3ffd7b7293498a997f4b45ba0ba0eb7f090ca9b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 8 00:21:04.916916 containerd[1591]: time="2025-11-08T00:21:04.916870568Z" level=info msg="CreateContainer within sandbox \"d005bc45de70385366a65267c3ffd7b7293498a997f4b45ba0ba0eb7f090ca9b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a3cab08c21802b99bf73b8b36acd94103e4b947855d6409ca09e2ce5d1456a47\""
Nov 8 00:21:04.918619 containerd[1591]: time="2025-11-08T00:21:04.918020271Z" level=info msg="StartContainer for \"a3cab08c21802b99bf73b8b36acd94103e4b947855d6409ca09e2ce5d1456a47\""
Nov 8 00:21:05.042441 containerd[1591]: time="2025-11-08T00:21:05.042281239Z" level=info msg="StartContainer for \"a3cab08c21802b99bf73b8b36acd94103e4b947855d6409ca09e2ce5d1456a47\" returns successfully"
Nov 8 00:21:05.307377 kubelet[2695]: E1108 00:21:05.307305 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384"
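The recurring "Nameserver limits exceeded" entries come from kubelet capping a pod's nameserver list at three entries (the classic glibc resolver limit, kubelet's MaxDNSNameservers) and logging the line it actually applied; the duplicate 67.207.67.2 in the applied line suggests the merged list it started from was longer. A hedged sketch of that truncation; the input slice is illustrative, since the node's real resolv.conf is not in this log:

// nameservers.go - sketch of the dns.go:153 "Nameserver limits exceeded" path.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the limit kubelet enforces per pod

func applyLimit(servers []string) []string {
	if len(servers) <= maxNameservers {
		return servers
	}
	applied := servers[:maxNameservers]
	// Mirrors the warning text seen throughout this log.
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n", strings.Join(applied, " "))
	return applied
}

func main() {
	// Illustrative input only (assumption): four entries, one duplicated.
	applyLimit([]string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "10.0.0.2"})
}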
Nov 8 00:21:05.445513 kubelet[2695]: E1108 00:21:05.444531 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 8 00:21:05.533006 kubelet[2695]: E1108 00:21:05.532944 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:21:05.533261 kubelet[2695]: W1108 00:21:05.533242 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:21:05.533363 kubelet[2695]: E1108 00:21:05.533350 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:21:05.571519 kubelet[2695]: E1108 00:21:05.571505 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:21:05.571640 kubelet[2695]: W1108 00:21:05.571595 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:21:05.571640 kubelet[2695]: E1108 00:21:05.571613 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Nov 8 00:21:06.149325 containerd[1591]: time="2025-11-08T00:21:06.148760556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:06.149994 containerd[1591]: time="2025-11-08T00:21:06.149731122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:21:06.150619 containerd[1591]: time="2025-11-08T00:21:06.150554727Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:06.153750 containerd[1591]: time="2025-11-08T00:21:06.153696855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:06.154829 containerd[1591]: time="2025-11-08T00:21:06.154682739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.344681018s" Nov 8 00:21:06.154829 containerd[1591]: time="2025-11-08T00:21:06.154722072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:21:06.159332 containerd[1591]: time="2025-11-08T00:21:06.159231314Z" level=info msg="CreateContainer within sandbox \"f5c0c67426191ee67f9bbee5bc066b0320345c2ed8f5265b0e919e36a0856490\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:21:06.175380 containerd[1591]: time="2025-11-08T00:21:06.175211054Z" level=info msg="CreateContainer within sandbox \"f5c0c67426191ee67f9bbee5bc066b0320345c2ed8f5265b0e919e36a0856490\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5c281461f5dfc4ced3f74a58ce0bd371cf73196ec9e0ea3b091bafd28eee6e64\"" Nov 8 00:21:06.176733 containerd[1591]: time="2025-11-08T00:21:06.176674040Z" level=info msg="StartContainer for \"5c281461f5dfc4ced3f74a58ce0bd371cf73196ec9e0ea3b091bafd28eee6e64\"" Nov 8 00:21:06.237556 systemd[1]: run-containerd-runc-k8s.io-5c281461f5dfc4ced3f74a58ce0bd371cf73196ec9e0ea3b091bafd28eee6e64-runc.aaRmTH.mount: Deactivated successfully. 
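The CreateContainer/StartContainer pairs in these entries are CRI calls served by containerd. For orientation, this is the equivalent create-and-start flow written against containerd's public Go client; a standalone sketch, not the CRI plugin's internal path (the demo container and snapshot IDs are made up; the socket path and image ref are the ones in this log):

// createstart.go - pull, create, start, and wait, mirroring the log's flow.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// CRI-managed containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Corresponds to the PullImage / ImageCreate events above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: metadata plus a snapshot and an OCI runtime spec.
	container, err := client.NewContainer(ctx, "flexvol-driver-demo",
		containerd.WithNewSnapshot("flexvol-driver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: the task is the running instance; when it exits,
	// its runc shim tears down, producing the "shim disconnected"
	// cleanup entries seen just below.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	status := <-exitCh
	log.Printf("container exited with status %d", status.ExitCode())
}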
Nov 8 00:21:06.289656 containerd[1591]: time="2025-11-08T00:21:06.289602007Z" level=info msg="StartContainer for \"5c281461f5dfc4ced3f74a58ce0bd371cf73196ec9e0ea3b091bafd28eee6e64\" returns successfully"
Nov 8 00:21:06.364415 containerd[1591]: time="2025-11-08T00:21:06.337329018Z" level=info msg="shim disconnected" id=5c281461f5dfc4ced3f74a58ce0bd371cf73196ec9e0ea3b091bafd28eee6e64 namespace=k8s.io
Nov 8 00:21:06.364415 containerd[1591]: time="2025-11-08T00:21:06.364207440Z" level=warning msg="cleaning up after shim disconnected" id=5c281461f5dfc4ced3f74a58ce0bd371cf73196ec9e0ea3b091bafd28eee6e64 namespace=k8s.io
Nov 8 00:21:06.364415 containerd[1591]: time="2025-11-08T00:21:06.364225197Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:21:06.449105 kubelet[2695]: E1108 00:21:06.449068 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 8 00:21:06.452005 kubelet[2695]: E1108 00:21:06.449497 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 8 00:21:06.455850 containerd[1591]: time="2025-11-08T00:21:06.454660617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 8 00:21:06.484907 kubelet[2695]: I1108 00:21:06.484799 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77965d89cd-hf6xk" podStartSLOduration=3.056660748 podStartE2EDuration="6.482690814s" podCreationTimestamp="2025-11-08 00:21:00 +0000 UTC" firstStartedPulling="2025-11-08 00:21:01.383614597 +0000 UTC m=+22.300382813" lastFinishedPulling="2025-11-08 00:21:04.809644654 +0000 UTC m=+25.726412879" observedRunningTime="2025-11-08 00:21:05.51604475 +0000 UTC m=+26.432812983" watchObservedRunningTime="2025-11-08 00:21:06.482690814 +0000 UTC m=+27.399459054"
Nov 8 00:21:06.827558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c281461f5dfc4ced3f74a58ce0bd371cf73196ec9e0ea3b091bafd28eee6e64-rootfs.mount: Deactivated successfully.
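The pod_startup_latency_tracker entry is internally consistent with the pull events earlier in the log: lastFinishedPulling minus firstStartedPulling is 00:21:04.809644654 - 00:21:01.383614597 = 3.426030057 s of image pulling, and podStartE2EDuration minus that pull window is 6.482690814 - 3.426030057 = 3.056660757 s, which matches the reported podStartSLOduration of 3.056660748 to within rounding; the SLO figure excludes image-pull time by design.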
Nov 8 00:21:07.308514 kubelet[2695]: E1108 00:21:07.307318 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384"
Nov 8 00:21:07.451172 kubelet[2695]: E1108 00:21:07.451139 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 8 00:21:08.454745 kubelet[2695]: E1108 00:21:08.454701 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 8 00:21:09.308495 kubelet[2695]: E1108 00:21:09.308250 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384"
Nov 8 00:21:10.331607 containerd[1591]: time="2025-11-08T00:21:10.331550353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:10.333011 containerd[1591]: time="2025-11-08T00:21:10.332430865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 8 00:21:10.333011 containerd[1591]: time="2025-11-08T00:21:10.332966956Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:10.335829 containerd[1591]: time="2025-11-08T00:21:10.335738736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:21:10.337358 containerd[1591]: time="2025-11-08T00:21:10.337051359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.881182432s"
Nov 8 00:21:10.337358 containerd[1591]: time="2025-11-08T00:21:10.337114007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 8 00:21:10.341043 containerd[1591]: time="2025-11-08T00:21:10.340989878Z" level=info msg="CreateContainer within sandbox \"f5c0c67426191ee67f9bbee5bc066b0320345c2ed8f5265b0e919e36a0856490\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 8 00:21:10.365085 containerd[1591]: time="2025-11-08T00:21:10.364352858Z" level=info msg="CreateContainer within sandbox \"f5c0c67426191ee67f9bbee5bc066b0320345c2ed8f5265b0e919e36a0856490\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e0992e1b30c013bc4e521671fe1d4f00b58b51149cb4c7ff460a921ddcd4006c\""
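The "cni plugin not initialized" errors repeat until the install-cni container created above finishes writing a network config: the runtime reports NetworkReady=false for as long as its CNI config directory is empty. A hedged sketch of that readiness condition; the directory scan mirrors the usual /etc/cni/net.d convention, and the file patterns are illustrative rather than lifted from the runtime's source:

// cniready.go - does any CNI network config exist yet?
package main

import (
	"fmt"
	"path/filepath"
)

// cniConfigured reports whether a CNI config file is present. Calico's
// install-cni writes its conflist here once calico-node is up, which is
// the moment NetworkReady flips to true and the pod_workers errors stop.
func cniConfigured(confDir string) bool {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		if matches, _ := filepath.Glob(filepath.Join(confDir, pattern)); len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(cniConfigured("/etc/cni/net.d"))
}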
level=info msg="StartContainer for \"e0992e1b30c013bc4e521671fe1d4f00b58b51149cb4c7ff460a921ddcd4006c\"" Nov 8 00:21:10.527863 containerd[1591]: time="2025-11-08T00:21:10.527776556Z" level=info msg="StartContainer for \"e0992e1b30c013bc4e521671fe1d4f00b58b51149cb4c7ff460a921ddcd4006c\" returns successfully" Nov 8 00:21:11.219400 kubelet[2695]: I1108 00:21:11.218953 2695 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:21:11.244058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0992e1b30c013bc4e521671fe1d4f00b58b51149cb4c7ff460a921ddcd4006c-rootfs.mount: Deactivated successfully. Nov 8 00:21:11.246194 containerd[1591]: time="2025-11-08T00:21:11.245262394Z" level=info msg="shim disconnected" id=e0992e1b30c013bc4e521671fe1d4f00b58b51149cb4c7ff460a921ddcd4006c namespace=k8s.io Nov 8 00:21:11.246194 containerd[1591]: time="2025-11-08T00:21:11.245318063Z" level=warning msg="cleaning up after shim disconnected" id=e0992e1b30c013bc4e521671fe1d4f00b58b51149cb4c7ff460a921ddcd4006c namespace=k8s.io Nov 8 00:21:11.246194 containerd[1591]: time="2025-11-08T00:21:11.245327340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:21:11.268876 containerd[1591]: time="2025-11-08T00:21:11.267514333Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:21:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:21:11.366062 containerd[1591]: time="2025-11-08T00:21:11.366020255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2mnck,Uid:f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:11.410271 kubelet[2695]: I1108 00:21:11.407102 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz5wp\" (UniqueName: \"kubernetes.io/projected/8d5058f1-2a34-4b46-bc5b-60d93e86f9f4-kube-api-access-vz5wp\") pod \"calico-apiserver-5b65b9d44c-ld5bj\" (UID: \"8d5058f1-2a34-4b46-bc5b-60d93e86f9f4\") " pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" Nov 8 00:21:11.410271 kubelet[2695]: I1108 00:21:11.407158 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d5058f1-2a34-4b46-bc5b-60d93e86f9f4-calico-apiserver-certs\") pod \"calico-apiserver-5b65b9d44c-ld5bj\" (UID: \"8d5058f1-2a34-4b46-bc5b-60d93e86f9f4\") " pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" Nov 8 00:21:11.410271 kubelet[2695]: I1108 00:21:11.407177 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/571339fa-a980-4274-be42-77b940705c5d-tigera-ca-bundle\") pod \"calico-kube-controllers-78689fc948-mm7k2\" (UID: \"571339fa-a980-4274-be42-77b940705c5d\") " pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" Nov 8 00:21:11.410271 kubelet[2695]: I1108 00:21:11.407194 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95h7n\" (UniqueName: \"kubernetes.io/projected/571339fa-a980-4274-be42-77b940705c5d-kube-api-access-95h7n\") pod \"calico-kube-controllers-78689fc948-mm7k2\" (UID: \"571339fa-a980-4274-be42-77b940705c5d\") " pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" Nov 8 00:21:11.410271 kubelet[2695]: I1108 00:21:11.407217 
2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aed9a615-02c1-40d6-81ad-65033e8e154c-config-volume\") pod \"coredns-668d6bf9bc-5blmv\" (UID: \"aed9a615-02c1-40d6-81ad-65033e8e154c\") " pod="kube-system/coredns-668d6bf9bc-5blmv" Nov 8 00:21:11.410632 kubelet[2695]: I1108 00:21:11.407264 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5-calico-apiserver-certs\") pod \"calico-apiserver-5b65b9d44c-vc5w2\" (UID: \"e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5\") " pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" Nov 8 00:21:11.410632 kubelet[2695]: I1108 00:21:11.407280 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b85782e-ef51-43c2-92d6-7721ec39bac1-config-volume\") pod \"coredns-668d6bf9bc-cbp7m\" (UID: \"5b85782e-ef51-43c2-92d6-7721ec39bac1\") " pod="kube-system/coredns-668d6bf9bc-cbp7m" Nov 8 00:21:11.410632 kubelet[2695]: I1108 00:21:11.407299 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gr6w\" (UniqueName: \"kubernetes.io/projected/5b85782e-ef51-43c2-92d6-7721ec39bac1-kube-api-access-6gr6w\") pod \"coredns-668d6bf9bc-cbp7m\" (UID: \"5b85782e-ef51-43c2-92d6-7721ec39bac1\") " pod="kube-system/coredns-668d6bf9bc-cbp7m" Nov 8 00:21:11.410632 kubelet[2695]: I1108 00:21:11.407316 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e197bac-6071-4052-8e5a-3a64d2035a47-goldmane-ca-bundle\") pod \"goldmane-666569f655-gs456\" (UID: \"6e197bac-6071-4052-8e5a-3a64d2035a47\") " pod="calico-system/goldmane-666569f655-gs456" Nov 8 00:21:11.410632 kubelet[2695]: I1108 00:21:11.407332 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh8fq\" (UniqueName: \"kubernetes.io/projected/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-kube-api-access-zh8fq\") pod \"whisker-6478bcb995-55zjb\" (UID: \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\") " pod="calico-system/whisker-6478bcb995-55zjb" Nov 8 00:21:11.410776 kubelet[2695]: I1108 00:21:11.407354 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-whisker-backend-key-pair\") pod \"whisker-6478bcb995-55zjb\" (UID: \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\") " pod="calico-system/whisker-6478bcb995-55zjb" Nov 8 00:21:11.410776 kubelet[2695]: I1108 00:21:11.407373 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-whisker-ca-bundle\") pod \"whisker-6478bcb995-55zjb\" (UID: \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\") " pod="calico-system/whisker-6478bcb995-55zjb" Nov 8 00:21:11.410776 kubelet[2695]: I1108 00:21:11.407393 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e197bac-6071-4052-8e5a-3a64d2035a47-config\") pod \"goldmane-666569f655-gs456\" (UID: 
\"6e197bac-6071-4052-8e5a-3a64d2035a47\") " pod="calico-system/goldmane-666569f655-gs456" Nov 8 00:21:11.410776 kubelet[2695]: I1108 00:21:11.407410 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxthx\" (UniqueName: \"kubernetes.io/projected/e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5-kube-api-access-rxthx\") pod \"calico-apiserver-5b65b9d44c-vc5w2\" (UID: \"e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5\") " pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" Nov 8 00:21:11.410776 kubelet[2695]: I1108 00:21:11.407428 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6e197bac-6071-4052-8e5a-3a64d2035a47-goldmane-key-pair\") pod \"goldmane-666569f655-gs456\" (UID: \"6e197bac-6071-4052-8e5a-3a64d2035a47\") " pod="calico-system/goldmane-666569f655-gs456" Nov 8 00:21:11.410920 kubelet[2695]: I1108 00:21:11.407446 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbgw2\" (UniqueName: \"kubernetes.io/projected/6e197bac-6071-4052-8e5a-3a64d2035a47-kube-api-access-lbgw2\") pod \"goldmane-666569f655-gs456\" (UID: \"6e197bac-6071-4052-8e5a-3a64d2035a47\") " pod="calico-system/goldmane-666569f655-gs456" Nov 8 00:21:11.414485 kubelet[2695]: I1108 00:21:11.412552 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5rjt\" (UniqueName: \"kubernetes.io/projected/aed9a615-02c1-40d6-81ad-65033e8e154c-kube-api-access-n5rjt\") pod \"coredns-668d6bf9bc-5blmv\" (UID: \"aed9a615-02c1-40d6-81ad-65033e8e154c\") " pod="kube-system/coredns-668d6bf9bc-5blmv" Nov 8 00:21:11.506875 kubelet[2695]: E1108 00:21:11.506745 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:11.535945 containerd[1591]: time="2025-11-08T00:21:11.535604944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:21:11.628150 containerd[1591]: time="2025-11-08T00:21:11.627807417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6478bcb995-55zjb,Uid:5fbe194b-f5c0-4f62-87a6-c191b1791ac3,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:11.640474 containerd[1591]: time="2025-11-08T00:21:11.640078972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78689fc948-mm7k2,Uid:571339fa-a980-4274-be42-77b940705c5d,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:11.644650 kubelet[2695]: E1108 00:21:11.643876 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:11.644796 containerd[1591]: time="2025-11-08T00:21:11.644560099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbp7m,Uid:5b85782e-ef51-43c2-92d6-7721ec39bac1,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:11.649376 kubelet[2695]: E1108 00:21:11.649335 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:11.658749 containerd[1591]: time="2025-11-08T00:21:11.658661085Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b65b9d44c-vc5w2,Uid:e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:21:11.659418 containerd[1591]: time="2025-11-08T00:21:11.659390147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5blmv,Uid:aed9a615-02c1-40d6-81ad-65033e8e154c,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:11.678531 containerd[1591]: time="2025-11-08T00:21:11.678474718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b65b9d44c-ld5bj,Uid:8d5058f1-2a34-4b46-bc5b-60d93e86f9f4,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:21:11.686336 containerd[1591]: time="2025-11-08T00:21:11.686297273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gs456,Uid:6e197bac-6071-4052-8e5a-3a64d2035a47,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:11.969546 containerd[1591]: time="2025-11-08T00:21:11.969442339Z" level=error msg="Failed to destroy network for sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:11.975494 containerd[1591]: time="2025-11-08T00:21:11.974661802Z" level=error msg="encountered an error cleaning up failed sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:11.976478 containerd[1591]: time="2025-11-08T00:21:11.976165546Z" level=error msg="Failed to destroy network for sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:11.979020 containerd[1591]: time="2025-11-08T00:21:11.978955746Z" level=error msg="encountered an error cleaning up failed sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.022082 containerd[1591]: time="2025-11-08T00:21:12.021993026Z" level=error msg="Failed to destroy network for sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.024782 containerd[1591]: time="2025-11-08T00:21:12.024220674Z" level=error msg="Failed to destroy network for sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.024782 containerd[1591]: time="2025-11-08T00:21:12.024755012Z" level=error msg="encountered an error cleaning up failed sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.025011 containerd[1591]: time="2025-11-08T00:21:12.024866223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2mnck,Uid:f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.025527 containerd[1591]: time="2025-11-08T00:21:12.025336943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbp7m,Uid:5b85782e-ef51-43c2-92d6-7721ec39bac1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.032685 containerd[1591]: time="2025-11-08T00:21:12.031145976Z" level=error msg="Failed to destroy network for sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.032685 containerd[1591]: time="2025-11-08T00:21:12.031649736Z" level=error msg="encountered an error cleaning up failed sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.032685 containerd[1591]: time="2025-11-08T00:21:12.031711717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gs456,Uid:6e197bac-6071-4052-8e5a-3a64d2035a47,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.032685 containerd[1591]: time="2025-11-08T00:21:12.031826972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78689fc948-mm7k2,Uid:571339fa-a980-4274-be42-77b940705c5d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.032685 containerd[1591]: time="2025-11-08T00:21:12.031979212Z" level=error msg="encountered an error cleaning up failed sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.034578 kubelet[2695]: E1108 00:21:12.033380 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.034578 kubelet[2695]: E1108 00:21:12.033509 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" Nov 8 00:21:12.034578 kubelet[2695]: E1108 00:21:12.033550 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" Nov 8 00:21:12.034898 kubelet[2695]: E1108 00:21:12.033621 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78689fc948-mm7k2_calico-system(571339fa-a980-4274-be42-77b940705c5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78689fc948-mm7k2_calico-system(571339fa-a980-4274-be42-77b940705c5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" podUID="571339fa-a980-4274-be42-77b940705c5d" Nov 8 00:21:12.034898 kubelet[2695]: E1108 00:21:12.034006 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.034898 kubelet[2695]: E1108 00:21:12.034090 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cbp7m" Nov 8 00:21:12.035122 kubelet[2695]: E1108 00:21:12.034149 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cbp7m" Nov 8 00:21:12.035122 kubelet[2695]: E1108 00:21:12.034222 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cbp7m_kube-system(5b85782e-ef51-43c2-92d6-7721ec39bac1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cbp7m_kube-system(5b85782e-ef51-43c2-92d6-7721ec39bac1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cbp7m" podUID="5b85782e-ef51-43c2-92d6-7721ec39bac1" Nov 8 00:21:12.035122 kubelet[2695]: E1108 00:21:12.034296 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.035293 kubelet[2695]: E1108 00:21:12.034328 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2mnck" Nov 8 00:21:12.035293 kubelet[2695]: E1108 00:21:12.034381 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2mnck" Nov 8 00:21:12.035293 kubelet[2695]: E1108 00:21:12.034423 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.035437 kubelet[2695]: E1108 00:21:12.034449 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2mnck_calico-system(f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2mnck_calico-system(f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384" Nov 8 00:21:12.035437 kubelet[2695]: E1108 00:21:12.034523 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gs456" Nov 8 00:21:12.035437 kubelet[2695]: E1108 00:21:12.034549 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gs456" Nov 8 00:21:12.035765 kubelet[2695]: E1108 00:21:12.034588 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-gs456_calico-system(6e197bac-6071-4052-8e5a-3a64d2035a47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-gs456_calico-system(6e197bac-6071-4052-8e5a-3a64d2035a47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gs456" podUID="6e197bac-6071-4052-8e5a-3a64d2035a47" Nov 8 00:21:12.037523 containerd[1591]: time="2025-11-08T00:21:12.037309296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6478bcb995-55zjb,Uid:5fbe194b-f5c0-4f62-87a6-c191b1791ac3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.040852 kubelet[2695]: E1108 00:21:12.040795 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.041306 kubelet[2695]: E1108 00:21:12.040882 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6478bcb995-55zjb" Nov 8 00:21:12.041306 kubelet[2695]: E1108 00:21:12.040908 
2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6478bcb995-55zjb" Nov 8 00:21:12.041306 kubelet[2695]: E1108 00:21:12.040983 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6478bcb995-55zjb_calico-system(5fbe194b-f5c0-4f62-87a6-c191b1791ac3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6478bcb995-55zjb_calico-system(5fbe194b-f5c0-4f62-87a6-c191b1791ac3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6478bcb995-55zjb" podUID="5fbe194b-f5c0-4f62-87a6-c191b1791ac3" Nov 8 00:21:12.092735 containerd[1591]: time="2025-11-08T00:21:12.092506068Z" level=error msg="Failed to destroy network for sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.093477 containerd[1591]: time="2025-11-08T00:21:12.093430691Z" level=error msg="encountered an error cleaning up failed sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.093563 containerd[1591]: time="2025-11-08T00:21:12.093512722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b65b9d44c-vc5w2,Uid:e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.094088 kubelet[2695]: E1108 00:21:12.093826 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.094088 kubelet[2695]: E1108 00:21:12.093891 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" Nov 8 00:21:12.094088 kubelet[2695]: E1108 00:21:12.093937 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" Nov 8 00:21:12.094580 kubelet[2695]: E1108 00:21:12.093981 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b65b9d44c-vc5w2_calico-apiserver(e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b65b9d44c-vc5w2_calico-apiserver(e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" podUID="e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5" Nov 8 00:21:12.111654 containerd[1591]: time="2025-11-08T00:21:12.111595708Z" level=error msg="Failed to destroy network for sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.112252 containerd[1591]: time="2025-11-08T00:21:12.112218064Z" level=error msg="Failed to destroy network for sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.113155 containerd[1591]: time="2025-11-08T00:21:12.112415936Z" level=error msg="encountered an error cleaning up failed sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.113155 containerd[1591]: time="2025-11-08T00:21:12.112491693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b65b9d44c-ld5bj,Uid:8d5058f1-2a34-4b46-bc5b-60d93e86f9f4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.113493 kubelet[2695]: E1108 00:21:12.112729 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.113493 kubelet[2695]: E1108 00:21:12.112794 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" Nov 8 00:21:12.113493 kubelet[2695]: E1108 00:21:12.112814 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" Nov 8 00:21:12.113796 kubelet[2695]: E1108 00:21:12.112877 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b65b9d44c-ld5bj_calico-apiserver(8d5058f1-2a34-4b46-bc5b-60d93e86f9f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b65b9d44c-ld5bj_calico-apiserver(8d5058f1-2a34-4b46-bc5b-60d93e86f9f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" podUID="8d5058f1-2a34-4b46-bc5b-60d93e86f9f4" Nov 8 00:21:12.113888 containerd[1591]: time="2025-11-08T00:21:12.113618408Z" level=error msg="encountered an error cleaning up failed sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.114024 containerd[1591]: time="2025-11-08T00:21:12.113971648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5blmv,Uid:aed9a615-02c1-40d6-81ad-65033e8e154c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.114341 kubelet[2695]: E1108 00:21:12.114294 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.114407 kubelet[2695]: E1108 00:21:12.114371 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5blmv" Nov 8 00:21:12.114526 kubelet[2695]: E1108 00:21:12.114406 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5blmv" Nov 8 00:21:12.114526 kubelet[2695]: E1108 00:21:12.114489 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5blmv_kube-system(aed9a615-02c1-40d6-81ad-65033e8e154c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5blmv_kube-system(aed9a615-02c1-40d6-81ad-65033e8e154c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5blmv" podUID="aed9a615-02c1-40d6-81ad-65033e8e154c" Nov 8 00:21:12.423346 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162-shm.mount: Deactivated successfully. Nov 8 00:21:12.507598 kubelet[2695]: I1108 00:21:12.507544 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:12.511663 kubelet[2695]: I1108 00:21:12.511160 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:12.514218 containerd[1591]: time="2025-11-08T00:21:12.513313872Z" level=info msg="StopPodSandbox for \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\"" Nov 8 00:21:12.517401 containerd[1591]: time="2025-11-08T00:21:12.516794170Z" level=info msg="Ensure that sandbox 4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214 in task-service has been cleanup successfully" Nov 8 00:21:12.521291 containerd[1591]: time="2025-11-08T00:21:12.521137976Z" level=info msg="StopPodSandbox for \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\"" Nov 8 00:21:12.521688 containerd[1591]: time="2025-11-08T00:21:12.521348241Z" level=info msg="Ensure that sandbox 1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d in task-service has been cleanup successfully" Nov 8 00:21:12.524299 kubelet[2695]: I1108 00:21:12.524163 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:12.525938 containerd[1591]: time="2025-11-08T00:21:12.525755799Z" level=info msg="StopPodSandbox for \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\"" Nov 8 00:21:12.526388 containerd[1591]: time="2025-11-08T00:21:12.526252856Z" level=info msg="Ensure that sandbox 
a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd in task-service has been cleanup successfully" Nov 8 00:21:12.531018 kubelet[2695]: I1108 00:21:12.530860 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:12.537337 containerd[1591]: time="2025-11-08T00:21:12.536786945Z" level=info msg="StopPodSandbox for \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\"" Nov 8 00:21:12.539274 containerd[1591]: time="2025-11-08T00:21:12.539231439Z" level=info msg="Ensure that sandbox c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092 in task-service has been cleanup successfully" Nov 8 00:21:12.546223 kubelet[2695]: I1108 00:21:12.545072 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:12.547895 containerd[1591]: time="2025-11-08T00:21:12.547854140Z" level=info msg="StopPodSandbox for \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\"" Nov 8 00:21:12.551539 containerd[1591]: time="2025-11-08T00:21:12.548804407Z" level=info msg="Ensure that sandbox e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945 in task-service has been cleanup successfully" Nov 8 00:21:12.556165 kubelet[2695]: I1108 00:21:12.556129 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:12.559600 containerd[1591]: time="2025-11-08T00:21:12.559557825Z" level=info msg="StopPodSandbox for \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\"" Nov 8 00:21:12.561203 kubelet[2695]: I1108 00:21:12.561101 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:12.562064 containerd[1591]: time="2025-11-08T00:21:12.561554652Z" level=info msg="Ensure that sandbox c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0 in task-service has been cleanup successfully" Nov 8 00:21:12.571002 containerd[1591]: time="2025-11-08T00:21:12.570956490Z" level=info msg="StopPodSandbox for \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\"" Nov 8 00:21:12.578509 kubelet[2695]: I1108 00:21:12.578447 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:12.596873 containerd[1591]: time="2025-11-08T00:21:12.596648731Z" level=info msg="StopPodSandbox for \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\"" Nov 8 00:21:12.598633 containerd[1591]: time="2025-11-08T00:21:12.598449050Z" level=info msg="Ensure that sandbox edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162 in task-service has been cleanup successfully" Nov 8 00:21:12.600744 containerd[1591]: time="2025-11-08T00:21:12.600622883Z" level=info msg="Ensure that sandbox 26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a in task-service has been cleanup successfully" Nov 8 00:21:12.670307 containerd[1591]: time="2025-11-08T00:21:12.670256916Z" level=error msg="StopPodSandbox for \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\" failed" error="failed to destroy network for sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.670950 containerd[1591]: time="2025-11-08T00:21:12.670488703Z" level=error msg="StopPodSandbox for \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\" failed" error="failed to destroy network for sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.671003 kubelet[2695]: E1108 00:21:12.670531 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:12.671003 kubelet[2695]: E1108 00:21:12.670598 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092"} Nov 8 00:21:12.671003 kubelet[2695]: E1108 00:21:12.670660 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b85782e-ef51-43c2-92d6-7721ec39bac1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:12.671003 kubelet[2695]: E1108 00:21:12.670683 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b85782e-ef51-43c2-92d6-7721ec39bac1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cbp7m" podUID="5b85782e-ef51-43c2-92d6-7721ec39bac1" Nov 8 00:21:12.671180 kubelet[2695]: E1108 00:21:12.670744 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:12.671180 kubelet[2695]: E1108 00:21:12.670768 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d"} Nov 8 00:21:12.671180 kubelet[2695]: E1108 00:21:12.670788 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"aed9a615-02c1-40d6-81ad-65033e8e154c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:12.671180 kubelet[2695]: E1108 00:21:12.670805 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aed9a615-02c1-40d6-81ad-65033e8e154c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5blmv" podUID="aed9a615-02c1-40d6-81ad-65033e8e154c" Nov 8 00:21:12.676525 containerd[1591]: time="2025-11-08T00:21:12.675799955Z" level=error msg="StopPodSandbox for \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\" failed" error="failed to destroy network for sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.676636 kubelet[2695]: E1108 00:21:12.676040 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:12.676636 kubelet[2695]: E1108 00:21:12.676104 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214"} Nov 8 00:21:12.676636 kubelet[2695]: E1108 00:21:12.676139 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e197bac-6071-4052-8e5a-3a64d2035a47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:12.676636 kubelet[2695]: E1108 00:21:12.676163 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e197bac-6071-4052-8e5a-3a64d2035a47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gs456" podUID="6e197bac-6071-4052-8e5a-3a64d2035a47" Nov 8 00:21:12.692495 containerd[1591]: 
time="2025-11-08T00:21:12.692396050Z" level=error msg="StopPodSandbox for \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\" failed" error="failed to destroy network for sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.693171 kubelet[2695]: E1108 00:21:12.692803 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:12.693171 kubelet[2695]: E1108 00:21:12.692881 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0"} Nov 8 00:21:12.693171 kubelet[2695]: E1108 00:21:12.692930 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d5058f1-2a34-4b46-bc5b-60d93e86f9f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:12.693171 kubelet[2695]: E1108 00:21:12.692964 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d5058f1-2a34-4b46-bc5b-60d93e86f9f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" podUID="8d5058f1-2a34-4b46-bc5b-60d93e86f9f4" Nov 8 00:21:12.711961 containerd[1591]: time="2025-11-08T00:21:12.711802084Z" level=error msg="StopPodSandbox for \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\" failed" error="failed to destroy network for sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.712694 kubelet[2695]: E1108 00:21:12.712549 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:12.712694 kubelet[2695]: E1108 00:21:12.712605 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd"} Nov 8 00:21:12.712694 kubelet[2695]: E1108 00:21:12.712642 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:12.712694 kubelet[2695]: E1108 00:21:12.712667 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" podUID="e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5" Nov 8 00:21:12.719923 containerd[1591]: time="2025-11-08T00:21:12.719856726Z" level=error msg="StopPodSandbox for \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\" failed" error="failed to destroy network for sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.720391 kubelet[2695]: E1108 00:21:12.720148 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:12.720391 kubelet[2695]: E1108 00:21:12.720205 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162"} Nov 8 00:21:12.720391 kubelet[2695]: E1108 00:21:12.720247 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:12.720391 kubelet[2695]: E1108 00:21:12.720272 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384" Nov 8 00:21:12.732877 containerd[1591]: time="2025-11-08T00:21:12.732784178Z" level=error msg="StopPodSandbox for \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\" failed" error="failed to destroy network for sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.733443 kubelet[2695]: E1108 00:21:12.733176 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:12.733443 kubelet[2695]: E1108 00:21:12.733253 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945"} Nov 8 00:21:12.733443 kubelet[2695]: E1108 00:21:12.733302 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"571339fa-a980-4274-be42-77b940705c5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:12.733443 kubelet[2695]: E1108 00:21:12.733336 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"571339fa-a980-4274-be42-77b940705c5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" podUID="571339fa-a980-4274-be42-77b940705c5d" Nov 8 00:21:12.737895 containerd[1591]: time="2025-11-08T00:21:12.737794243Z" level=error msg="StopPodSandbox for \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\" failed" error="failed to destroy network for sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:12.738407 kubelet[2695]: E1108 00:21:12.738237 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:12.738407 kubelet[2695]: E1108 00:21:12.738309 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a"} Nov 8 00:21:12.738407 kubelet[2695]: E1108 00:21:12.738351 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:12.738407 kubelet[2695]: E1108 00:21:12.738375 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6478bcb995-55zjb" podUID="5fbe194b-f5c0-4f62-87a6-c191b1791ac3" Nov 8 00:21:19.031194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3051834953.mount: Deactivated successfully. Nov 8 00:21:19.202041 containerd[1591]: time="2025-11-08T00:21:19.194737655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:21:19.211236 containerd[1591]: time="2025-11-08T00:21:19.211160440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:19.217373 containerd[1591]: time="2025-11-08T00:21:19.215997979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.669533105s" Nov 8 00:21:19.217373 containerd[1591]: time="2025-11-08T00:21:19.216074525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:21:19.250176 containerd[1591]: time="2025-11-08T00:21:19.248072253Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:19.281760 containerd[1591]: time="2025-11-08T00:21:19.281598039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:19.426713 containerd[1591]: time="2025-11-08T00:21:19.426580255Z" level=info msg="CreateContainer within sandbox \"f5c0c67426191ee67f9bbee5bc066b0320345c2ed8f5265b0e919e36a0856490\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:21:19.632838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861655092.mount: Deactivated successfully. Nov 8 00:21:19.669622 containerd[1591]: time="2025-11-08T00:21:19.669520399Z" level=info msg="CreateContainer within sandbox \"f5c0c67426191ee67f9bbee5bc066b0320345c2ed8f5265b0e919e36a0856490\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ab778d04026f8945bc175efbad488f9a0bc6129bc3ee2f8b5a85e42cc39ece3c\"" Nov 8 00:21:19.671988 containerd[1591]: time="2025-11-08T00:21:19.671935563Z" level=info msg="StartContainer for \"ab778d04026f8945bc175efbad488f9a0bc6129bc3ee2f8b5a85e42cc39ece3c\"" Nov 8 00:21:19.830407 containerd[1591]: time="2025-11-08T00:21:19.830286882Z" level=info msg="StartContainer for \"ab778d04026f8945bc175efbad488f9a0bc6129bc3ee2f8b5a85e42cc39ece3c\" returns successfully" Nov 8 00:21:19.949063 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:21:19.951514 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 8 00:21:20.193104 containerd[1591]: time="2025-11-08T00:21:20.191725939Z" level=info msg="StopPodSandbox for \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\"" Nov 8 00:21:20.338943 systemd-journald[1145]: Under memory pressure, flushing caches. Nov 8 00:21:20.335872 systemd-resolved[1484]: Under memory pressure, flushing caches. Nov 8 00:21:20.335990 systemd-resolved[1484]: Flushed all caches. Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.322 [INFO][3913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.323 [INFO][3913] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" iface="eth0" netns="/var/run/netns/cni-fd623bab-500a-c3f7-2b0c-8a4f8a57af84" Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.324 [INFO][3913] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" iface="eth0" netns="/var/run/netns/cni-fd623bab-500a-c3f7-2b0c-8a4f8a57af84" Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.325 [INFO][3913] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" iface="eth0" netns="/var/run/netns/cni-fd623bab-500a-c3f7-2b0c-8a4f8a57af84" Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.325 [INFO][3913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.325 [INFO][3913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.559 [INFO][3921] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" HandleID="k8s-pod-network.26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.561 [INFO][3921] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.562 [INFO][3921] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.582 [WARNING][3921] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" HandleID="k8s-pod-network.26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.582 [INFO][3921] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" HandleID="k8s-pod-network.26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.585 [INFO][3921] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:20.591670 containerd[1591]: 2025-11-08 00:21:20.588 [INFO][3913] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:20.595156 containerd[1591]: time="2025-11-08T00:21:20.594775485Z" level=info msg="TearDown network for sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\" successfully" Nov 8 00:21:20.595156 containerd[1591]: time="2025-11-08T00:21:20.595021983Z" level=info msg="StopPodSandbox for \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\" returns successfully" Nov 8 00:21:20.598973 systemd[1]: run-netns-cni\x2dfd623bab\x2d500a\x2dc3f7\x2d2b0c\x2d8a4f8a57af84.mount: Deactivated successfully. 
Nov 8 00:21:20.640056 kubelet[2695]: I1108 00:21:20.639421 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-whisker-backend-key-pair\") pod \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\" (UID: \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\") " Nov 8 00:21:20.640056 kubelet[2695]: I1108 00:21:20.639565 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-whisker-ca-bundle\") pod \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\" (UID: \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\") " Nov 8 00:21:20.640056 kubelet[2695]: I1108 00:21:20.639607 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh8fq\" (UniqueName: \"kubernetes.io/projected/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-kube-api-access-zh8fq\") pod \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\" (UID: \"5fbe194b-f5c0-4f62-87a6-c191b1791ac3\") " Nov 8 00:21:20.658105 systemd[1]: var-lib-kubelet-pods-5fbe194b\x2df5c0\x2d4f62\x2d87a6\x2dc191b1791ac3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:21:20.664589 systemd[1]: var-lib-kubelet-pods-5fbe194b\x2df5c0\x2d4f62\x2d87a6\x2dc191b1791ac3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzh8fq.mount: Deactivated successfully. Nov 8 00:21:20.665656 kubelet[2695]: I1108 00:21:20.664104 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5fbe194b-f5c0-4f62-87a6-c191b1791ac3" (UID: "5fbe194b-f5c0-4f62-87a6-c191b1791ac3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:21:20.665656 kubelet[2695]: I1108 00:21:20.665320 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-kube-api-access-zh8fq" (OuterVolumeSpecName: "kube-api-access-zh8fq") pod "5fbe194b-f5c0-4f62-87a6-c191b1791ac3" (UID: "5fbe194b-f5c0-4f62-87a6-c191b1791ac3"). InnerVolumeSpecName "kube-api-access-zh8fq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:21:20.666528 kubelet[2695]: I1108 00:21:20.664500 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5fbe194b-f5c0-4f62-87a6-c191b1791ac3" (UID: "5fbe194b-f5c0-4f62-87a6-c191b1791ac3"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:21:20.671037 kubelet[2695]: E1108 00:21:20.669156 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:20.735491 kubelet[2695]: I1108 00:21:20.730822 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2dmbl" podStartSLOduration=2.914937904 podStartE2EDuration="20.724562618s" podCreationTimestamp="2025-11-08 00:21:00 +0000 UTC" firstStartedPulling="2025-11-08 00:21:01.43949855 +0000 UTC m=+22.356266766" lastFinishedPulling="2025-11-08 00:21:19.249123268 +0000 UTC m=+40.165891480" observedRunningTime="2025-11-08 00:21:20.7037971 +0000 UTC m=+41.620565334" watchObservedRunningTime="2025-11-08 00:21:20.724562618 +0000 UTC m=+41.641330855" Nov 8 00:21:20.743755 kubelet[2695]: I1108 00:21:20.743175 2695 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-01b3a4b0a8\" DevicePath \"\"" Nov 8 00:21:20.743755 kubelet[2695]: I1108 00:21:20.743209 2695 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-whisker-ca-bundle\") on node \"ci-4081.3.6-n-01b3a4b0a8\" DevicePath \"\"" Nov 8 00:21:20.743755 kubelet[2695]: I1108 00:21:20.743221 2695 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zh8fq\" (UniqueName: \"kubernetes.io/projected/5fbe194b-f5c0-4f62-87a6-c191b1791ac3-kube-api-access-zh8fq\") on node \"ci-4081.3.6-n-01b3a4b0a8\" DevicePath \"\"" Nov 8 00:21:20.846721 kubelet[2695]: I1108 00:21:20.846404 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddv98\" (UniqueName: \"kubernetes.io/projected/a194daac-f83a-4a21-ba16-72b7bfe8925b-kube-api-access-ddv98\") pod \"whisker-74db99b9f5-n8j6t\" (UID: \"a194daac-f83a-4a21-ba16-72b7bfe8925b\") " pod="calico-system/whisker-74db99b9f5-n8j6t" Nov 8 00:21:20.846721 kubelet[2695]: I1108 00:21:20.846535 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a194daac-f83a-4a21-ba16-72b7bfe8925b-whisker-ca-bundle\") pod \"whisker-74db99b9f5-n8j6t\" (UID: \"a194daac-f83a-4a21-ba16-72b7bfe8925b\") " pod="calico-system/whisker-74db99b9f5-n8j6t" Nov 8 00:21:20.846721 kubelet[2695]: I1108 00:21:20.846569 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a194daac-f83a-4a21-ba16-72b7bfe8925b-whisker-backend-key-pair\") pod \"whisker-74db99b9f5-n8j6t\" (UID: \"a194daac-f83a-4a21-ba16-72b7bfe8925b\") " pod="calico-system/whisker-74db99b9f5-n8j6t" Nov 8 00:21:21.088477 containerd[1591]: time="2025-11-08T00:21:21.087957357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74db99b9f5-n8j6t,Uid:a194daac-f83a-4a21-ba16-72b7bfe8925b,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:21.283927 systemd-networkd[1222]: cali37f18eea586: Link UP Nov 8 00:21:21.284094 systemd-networkd[1222]: cali37f18eea586: Gained carrier Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.165 [INFO][3942] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.179 [INFO][3942] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0 whisker-74db99b9f5- calico-system a194daac-f83a-4a21-ba16-72b7bfe8925b 911 0 2025-11-08 00:21:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74db99b9f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-01b3a4b0a8 whisker-74db99b9f5-n8j6t eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali37f18eea586 [] [] }} ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Namespace="calico-system" Pod="whisker-74db99b9f5-n8j6t" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.180 [INFO][3942] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Namespace="calico-system" Pod="whisker-74db99b9f5-n8j6t" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.219 [INFO][3954] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" HandleID="k8s-pod-network.95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.222 [INFO][3954] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" HandleID="k8s-pod-network.95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-01b3a4b0a8", "pod":"whisker-74db99b9f5-n8j6t", "timestamp":"2025-11-08 00:21:21.219858406 +0000 UTC"}, Hostname:"ci-4081.3.6-n-01b3a4b0a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.222 [INFO][3954] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.223 [INFO][3954] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.223 [INFO][3954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-01b3a4b0a8' Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.233 [INFO][3954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.242 [INFO][3954] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.247 [INFO][3954] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.250 [INFO][3954] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.252 [INFO][3954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.252 [INFO][3954] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.254 [INFO][3954] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.258 [INFO][3954] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.264 [INFO][3954] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.65/26] block=192.168.38.64/26 handle="k8s-pod-network.95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.264 [INFO][3954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.65/26] handle="k8s-pod-network.95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.265 [INFO][3954] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:21.308999 containerd[1591]: 2025-11-08 00:21:21.265 [INFO][3954] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.65/26] IPv6=[] ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" HandleID="k8s-pod-network.95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0" Nov 8 00:21:21.310194 containerd[1591]: 2025-11-08 00:21:21.269 [INFO][3942] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Namespace="calico-system" Pod="whisker-74db99b9f5-n8j6t" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0", GenerateName:"whisker-74db99b9f5-", Namespace:"calico-system", SelfLink:"", UID:"a194daac-f83a-4a21-ba16-72b7bfe8925b", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74db99b9f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"", Pod:"whisker-74db99b9f5-n8j6t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali37f18eea586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:21.310194 containerd[1591]: 2025-11-08 00:21:21.269 [INFO][3942] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.65/32] ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Namespace="calico-system" Pod="whisker-74db99b9f5-n8j6t" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0" Nov 8 00:21:21.310194 containerd[1591]: 2025-11-08 00:21:21.269 [INFO][3942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37f18eea586 ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Namespace="calico-system" Pod="whisker-74db99b9f5-n8j6t" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0" Nov 8 00:21:21.310194 containerd[1591]: 2025-11-08 00:21:21.280 [INFO][3942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Namespace="calico-system" Pod="whisker-74db99b9f5-n8j6t" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0" Nov 8 00:21:21.310194 containerd[1591]: 2025-11-08 00:21:21.280 [INFO][3942] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Namespace="calico-system" 
Pod="whisker-74db99b9f5-n8j6t" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0", GenerateName:"whisker-74db99b9f5-", Namespace:"calico-system", SelfLink:"", UID:"a194daac-f83a-4a21-ba16-72b7bfe8925b", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74db99b9f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f", Pod:"whisker-74db99b9f5-n8j6t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali37f18eea586", MAC:"8e:19:1f:35:93:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:21.310194 containerd[1591]: 2025-11-08 00:21:21.303 [INFO][3942] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f" Namespace="calico-system" Pod="whisker-74db99b9f5-n8j6t" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--74db99b9f5--n8j6t-eth0" Nov 8 00:21:21.323311 kubelet[2695]: I1108 00:21:21.323263 2695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fbe194b-f5c0-4f62-87a6-c191b1791ac3" path="/var/lib/kubelet/pods/5fbe194b-f5c0-4f62-87a6-c191b1791ac3/volumes" Nov 8 00:21:21.345061 containerd[1591]: time="2025-11-08T00:21:21.344931653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:21.345498 containerd[1591]: time="2025-11-08T00:21:21.345118847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:21.345980 containerd[1591]: time="2025-11-08T00:21:21.345929836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:21.346480 containerd[1591]: time="2025-11-08T00:21:21.346422127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:21.440066 containerd[1591]: time="2025-11-08T00:21:21.440020957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74db99b9f5-n8j6t,Uid:a194daac-f83a-4a21-ba16-72b7bfe8925b,Namespace:calico-system,Attempt:0,} returns sandbox id \"95cbd99a63a001d298a8db59e29b2138d3c9a3402ae035872c35dd04daa9453f\"" Nov 8 00:21:21.448760 containerd[1591]: time="2025-11-08T00:21:21.448719958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:21:21.691621 kubelet[2695]: E1108 00:21:21.691117 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:21.858475 containerd[1591]: time="2025-11-08T00:21:21.858408715Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:21.880519 containerd[1591]: time="2025-11-08T00:21:21.865478702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:21:21.880519 containerd[1591]: time="2025-11-08T00:21:21.865889417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:21:21.881086 kubelet[2695]: E1108 00:21:21.878718 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:21.881086 kubelet[2695]: E1108 00:21:21.879363 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:21.886701 kubelet[2695]: E1108 00:21:21.886261 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c0f4f730874f4da2b1ae525f279b9089,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ddv98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74db99b9f5-n8j6t_calico-system(a194daac-f83a-4a21-ba16-72b7bfe8925b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:21.890661 containerd[1591]: time="2025-11-08T00:21:21.890548540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:21:22.122562 kernel: bpftool[4148]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:21:22.231026 containerd[1591]: time="2025-11-08T00:21:22.230887237Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:22.231649 containerd[1591]: time="2025-11-08T00:21:22.231615258Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:21:22.231736 containerd[1591]: time="2025-11-08T00:21:22.231706891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:22.232666 kubelet[2695]: E1108 00:21:22.232617 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:21:22.232859 kubelet[2695]: E1108 00:21:22.232672 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:21:22.232960 kubelet[2695]: E1108 00:21:22.232792 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddv98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74db99b9f5-n8j6t_calico-system(a194daac-f83a-4a21-ba16-72b7bfe8925b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:22.240962 kubelet[2695]: E1108 00:21:22.240792 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74db99b9f5-n8j6t" podUID="a194daac-f83a-4a21-ba16-72b7bfe8925b" Nov 8 00:21:22.383863 systemd-resolved[1484]: Under memory pressure, flushing caches. Nov 8 00:21:22.386729 systemd-journald[1145]: Under memory pressure, flushing caches. 
Nov 8 00:21:22.383886 systemd-resolved[1484]: Flushed all caches. Nov 8 00:21:22.390034 systemd-networkd[1222]: cali37f18eea586: Gained IPv6LL Nov 8 00:21:22.428644 systemd-networkd[1222]: vxlan.calico: Link UP Nov 8 00:21:22.428652 systemd-networkd[1222]: vxlan.calico: Gained carrier Nov 8 00:21:22.691029 kubelet[2695]: E1108 00:21:22.690951 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:22.699039 kubelet[2695]: E1108 00:21:22.698987 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74db99b9f5-n8j6t" podUID="a194daac-f83a-4a21-ba16-72b7bfe8925b" Nov 8 00:21:22.767302 systemd[1]: run-containerd-runc-k8s.io-ab778d04026f8945bc175efbad488f9a0bc6129bc3ee2f8b5a85e42cc39ece3c-runc.a9XoP5.mount: Deactivated successfully. Nov 8 00:21:23.471700 systemd-networkd[1222]: vxlan.calico: Gained IPv6LL Nov 8 00:21:24.309155 containerd[1591]: time="2025-11-08T00:21:24.308785188Z" level=info msg="StopPodSandbox for \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\"" Nov 8 00:21:24.309610 containerd[1591]: time="2025-11-08T00:21:24.309478318Z" level=info msg="StopPodSandbox for \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\"" Nov 8 00:21:24.313177 containerd[1591]: time="2025-11-08T00:21:24.312136149Z" level=info msg="StopPodSandbox for \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\"" Nov 8 00:21:24.313537 containerd[1591]: time="2025-11-08T00:21:24.313504058Z" level=info msg="StopPodSandbox for \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\"" Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.449 [INFO][4283] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.449 [INFO][4283] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" iface="eth0" netns="/var/run/netns/cni-836581c4-2ebe-1b5e-b07a-1be2f46d209a" Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.450 [INFO][4283] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" iface="eth0" netns="/var/run/netns/cni-836581c4-2ebe-1b5e-b07a-1be2f46d209a" Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.451 [INFO][4283] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" iface="eth0" netns="/var/run/netns/cni-836581c4-2ebe-1b5e-b07a-1be2f46d209a" Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.451 [INFO][4283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.451 [INFO][4283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.542 [INFO][4307] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" HandleID="k8s-pod-network.4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.542 [INFO][4307] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.542 [INFO][4307] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.557 [WARNING][4307] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" HandleID="k8s-pod-network.4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.557 [INFO][4307] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" HandleID="k8s-pod-network.4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.560 [INFO][4307] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.572092 containerd[1591]: 2025-11-08 00:21:24.564 [INFO][4283] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:24.575584 containerd[1591]: time="2025-11-08T00:21:24.575542854Z" level=info msg="TearDown network for sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\" successfully" Nov 8 00:21:24.575584 containerd[1591]: time="2025-11-08T00:21:24.575579723Z" level=info msg="StopPodSandbox for \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\" returns successfully" Nov 8 00:21:24.578654 containerd[1591]: time="2025-11-08T00:21:24.578614667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gs456,Uid:6e197bac-6071-4052-8e5a-3a64d2035a47,Namespace:calico-system,Attempt:1,}" Nov 8 00:21:24.582087 systemd[1]: run-netns-cni\x2d836581c4\x2d2ebe\x2d1b5e\x2db07a\x2d1be2f46d209a.mount: Deactivated successfully. Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.452 [INFO][4278] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.455 [INFO][4278] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" iface="eth0" netns="/var/run/netns/cni-cdfb35c7-380e-8ea1-65fa-f7c0e651ef13" Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.457 [INFO][4278] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" iface="eth0" netns="/var/run/netns/cni-cdfb35c7-380e-8ea1-65fa-f7c0e651ef13" Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.458 [INFO][4278] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" iface="eth0" netns="/var/run/netns/cni-cdfb35c7-380e-8ea1-65fa-f7c0e651ef13" Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.458 [INFO][4278] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.463 [INFO][4278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.547 [INFO][4312] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" HandleID="k8s-pod-network.a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.547 [INFO][4312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.560 [INFO][4312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.570 [WARNING][4312] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" HandleID="k8s-pod-network.a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.570 [INFO][4312] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" HandleID="k8s-pod-network.a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.579 [INFO][4312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.611206 containerd[1591]: 2025-11-08 00:21:24.591 [INFO][4278] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:24.613757 containerd[1591]: time="2025-11-08T00:21:24.611675528Z" level=info msg="TearDown network for sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\" successfully" Nov 8 00:21:24.613757 containerd[1591]: time="2025-11-08T00:21:24.611713987Z" level=info msg="StopPodSandbox for \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\" returns successfully" Nov 8 00:21:24.612612 systemd[1]: run-netns-cni\x2dcdfb35c7\x2d380e\x2d8ea1\x2d65fa\x2df7c0e651ef13.mount: Deactivated successfully. 
Nov 8 00:21:24.618970 containerd[1591]: time="2025-11-08T00:21:24.617863183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b65b9d44c-vc5w2,Uid:e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.479 [INFO][4282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.480 [INFO][4282] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" iface="eth0" netns="/var/run/netns/cni-03e68467-9c51-ae9d-0a6d-0cccba81a4f6" Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.480 [INFO][4282] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" iface="eth0" netns="/var/run/netns/cni-03e68467-9c51-ae9d-0a6d-0cccba81a4f6" Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.481 [INFO][4282] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" iface="eth0" netns="/var/run/netns/cni-03e68467-9c51-ae9d-0a6d-0cccba81a4f6" Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.481 [INFO][4282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.481 [INFO][4282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.558 [INFO][4317] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" HandleID="k8s-pod-network.1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.560 [INFO][4317] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.580 [INFO][4317] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.594 [WARNING][4317] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" HandleID="k8s-pod-network.1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.594 [INFO][4317] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" HandleID="k8s-pod-network.1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.597 [INFO][4317] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.619495 containerd[1591]: 2025-11-08 00:21:24.602 [INFO][4282] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:24.622959 containerd[1591]: time="2025-11-08T00:21:24.621682892Z" level=info msg="TearDown network for sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\" successfully" Nov 8 00:21:24.622959 containerd[1591]: time="2025-11-08T00:21:24.621747638Z" level=info msg="StopPodSandbox for \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\" returns successfully" Nov 8 00:21:24.623113 kubelet[2695]: E1108 00:21:24.622478 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:24.627426 systemd[1]: run-netns-cni\x2d03e68467\x2d9c51\x2dae9d\x2d0a6d\x2d0cccba81a4f6.mount: Deactivated successfully. Nov 8 00:21:24.631432 containerd[1591]: time="2025-11-08T00:21:24.631369841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5blmv,Uid:aed9a615-02c1-40d6-81ad-65033e8e154c,Namespace:kube-system,Attempt:1,}" Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.482 [INFO][4287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.483 [INFO][4287] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" iface="eth0" netns="/var/run/netns/cni-77849f52-1712-8f67-4c0d-0f00846a68ed" Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.483 [INFO][4287] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" iface="eth0" netns="/var/run/netns/cni-77849f52-1712-8f67-4c0d-0f00846a68ed" Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.483 [INFO][4287] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" iface="eth0" netns="/var/run/netns/cni-77849f52-1712-8f67-4c0d-0f00846a68ed" Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.483 [INFO][4287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.483 [INFO][4287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.590 [INFO][4322] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" HandleID="k8s-pod-network.c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.590 [INFO][4322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.597 [INFO][4322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.626 [WARNING][4322] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" HandleID="k8s-pod-network.c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.626 [INFO][4322] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" HandleID="k8s-pod-network.c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.629 [INFO][4322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.639503 containerd[1591]: 2025-11-08 00:21:24.636 [INFO][4287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:24.640230 containerd[1591]: time="2025-11-08T00:21:24.639713719Z" level=info msg="TearDown network for sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\" successfully" Nov 8 00:21:24.640230 containerd[1591]: time="2025-11-08T00:21:24.639859989Z" level=info msg="StopPodSandbox for \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\" returns successfully" Nov 8 00:21:24.649368 containerd[1591]: time="2025-11-08T00:21:24.649008824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b65b9d44c-ld5bj,Uid:8d5058f1-2a34-4b46-bc5b-60d93e86f9f4,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:21:24.877808 systemd-networkd[1222]: califa7ce56622c: Link UP Nov 8 00:21:24.878237 systemd-networkd[1222]: califa7ce56622c: Gained carrier Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.738 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0 coredns-668d6bf9bc- kube-system aed9a615-02c1-40d6-81ad-65033e8e154c 943 0 2025-11-08 00:20:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-01b3a4b0a8 coredns-668d6bf9bc-5blmv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa7ce56622c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5blmv" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.738 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5blmv" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.818 [INFO][4382] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" HandleID="k8s-pod-network.f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 
00:21:24.819 [INFO][4382] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" HandleID="k8s-pod-network.f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003be2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-01b3a4b0a8", "pod":"coredns-668d6bf9bc-5blmv", "timestamp":"2025-11-08 00:21:24.818821241 +0000 UTC"}, Hostname:"ci-4081.3.6-n-01b3a4b0a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.819 [INFO][4382] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.819 [INFO][4382] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.819 [INFO][4382] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-01b3a4b0a8' Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.832 [INFO][4382] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.841 [INFO][4382] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.846 [INFO][4382] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.849 [INFO][4382] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.852 [INFO][4382] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.852 [INFO][4382] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.853 [INFO][4382] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.857 [INFO][4382] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.866 [INFO][4382] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.66/26] block=192.168.38.64/26 handle="k8s-pod-network.f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.866 [INFO][4382] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.66/26] handle="k8s-pod-network.f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:24.904169 
containerd[1591]: 2025-11-08 00:21:24.866 [INFO][4382] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.904169 containerd[1591]: 2025-11-08 00:21:24.866 [INFO][4382] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.66/26] IPv6=[] ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" HandleID="k8s-pod-network.f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:24.905166 containerd[1591]: 2025-11-08 00:21:24.873 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5blmv" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aed9a615-02c1-40d6-81ad-65033e8e154c", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"", Pod:"coredns-668d6bf9bc-5blmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa7ce56622c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:24.905166 containerd[1591]: 2025-11-08 00:21:24.873 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.66/32] ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5blmv" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:24.905166 containerd[1591]: 2025-11-08 00:21:24.873 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa7ce56622c ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5blmv" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:24.905166 containerd[1591]: 2025-11-08 00:21:24.878 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5blmv" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:24.905166 containerd[1591]: 2025-11-08 00:21:24.880 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5blmv" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aed9a615-02c1-40d6-81ad-65033e8e154c", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb", Pod:"coredns-668d6bf9bc-5blmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa7ce56622c", MAC:"02:59:32:1c:f1:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:24.905166 containerd[1591]: 2025-11-08 00:21:24.900 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5blmv" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:24.939074 containerd[1591]: time="2025-11-08T00:21:24.938762515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:24.939074 containerd[1591]: time="2025-11-08T00:21:24.938845539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:24.939074 containerd[1591]: time="2025-11-08T00:21:24.938861239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:24.939074 containerd[1591]: time="2025-11-08T00:21:24.939000076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:25.015980 systemd-networkd[1222]: cali1c08bba9206: Link UP Nov 8 00:21:25.018137 systemd-networkd[1222]: cali1c08bba9206: Gained carrier Nov 8 00:21:25.044889 containerd[1591]: time="2025-11-08T00:21:25.044823107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5blmv,Uid:aed9a615-02c1-40d6-81ad-65033e8e154c,Namespace:kube-system,Attempt:1,} returns sandbox id \"f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb\"" Nov 8 00:21:25.046569 kubelet[2695]: E1108 00:21:25.046533 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.744 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0 goldmane-666569f655- calico-system 6e197bac-6071-4052-8e5a-3a64d2035a47 942 0 2025-11-08 00:20:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-01b3a4b0a8 goldmane-666569f655-gs456 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1c08bba9206 [] [] }} ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" Namespace="calico-system" Pod="goldmane-666569f655-gs456" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.745 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" Namespace="calico-system" Pod="goldmane-666569f655-gs456" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.824 [INFO][4388] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" HandleID="k8s-pod-network.6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.826 [INFO][4388] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" HandleID="k8s-pod-network.6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-01b3a4b0a8", "pod":"goldmane-666569f655-gs456", "timestamp":"2025-11-08 00:21:24.821295128 +0000 UTC"}, Hostname:"ci-4081.3.6-n-01b3a4b0a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 
00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.826 [INFO][4388] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.866 [INFO][4388] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.866 [INFO][4388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-01b3a4b0a8' Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.931 [INFO][4388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.944 [INFO][4388] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.952 [INFO][4388] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.957 [INFO][4388] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.962 [INFO][4388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.962 [INFO][4388] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.967 [INFO][4388] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.975 [INFO][4388] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.983 [INFO][4388] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.67/26] block=192.168.38.64/26 handle="k8s-pod-network.6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.985 [INFO][4388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.67/26] handle="k8s-pod-network.6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.985 [INFO][4388] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:25.049666 containerd[1591]: 2025-11-08 00:21:24.985 [INFO][4388] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.67/26] IPv6=[] ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" HandleID="k8s-pod-network.6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:25.050332 containerd[1591]: 2025-11-08 00:21:24.990 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" Namespace="calico-system" Pod="goldmane-666569f655-gs456" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6e197bac-6071-4052-8e5a-3a64d2035a47", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"", Pod:"goldmane-666569f655-gs456", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1c08bba9206", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:25.050332 containerd[1591]: 2025-11-08 00:21:24.991 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.67/32] ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" Namespace="calico-system" Pod="goldmane-666569f655-gs456" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:25.050332 containerd[1591]: 2025-11-08 00:21:24.991 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c08bba9206 ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" Namespace="calico-system" Pod="goldmane-666569f655-gs456" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:25.050332 containerd[1591]: 2025-11-08 00:21:25.018 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" Namespace="calico-system" Pod="goldmane-666569f655-gs456" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:25.050332 containerd[1591]: 2025-11-08 00:21:25.019 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" 
Namespace="calico-system" Pod="goldmane-666569f655-gs456" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6e197bac-6071-4052-8e5a-3a64d2035a47", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d", Pod:"goldmane-666569f655-gs456", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1c08bba9206", MAC:"02:16:c7:a4:2e:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:25.050332 containerd[1591]: 2025-11-08 00:21:25.038 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d" Namespace="calico-system" Pod="goldmane-666569f655-gs456" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:25.057306 containerd[1591]: time="2025-11-08T00:21:25.056362509Z" level=info msg="CreateContainer within sandbox \"f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:21:25.111029 containerd[1591]: time="2025-11-08T00:21:25.110669657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:25.111029 containerd[1591]: time="2025-11-08T00:21:25.110775040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:25.113426 containerd[1591]: time="2025-11-08T00:21:25.110800943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:25.114794 containerd[1591]: time="2025-11-08T00:21:25.113799011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:25.138839 systemd-networkd[1222]: cali96ead31dd33: Link UP Nov 8 00:21:25.154056 containerd[1591]: time="2025-11-08T00:21:25.153997309Z" level=info msg="CreateContainer within sandbox \"f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee7b1d793304698ec7d7450da053933c60c26fd1d389df0e0786060eaccf0d41\"" Nov 8 00:21:25.163559 systemd-networkd[1222]: cali96ead31dd33: Gained carrier Nov 8 00:21:25.167784 containerd[1591]: time="2025-11-08T00:21:25.167572030Z" level=info msg="StartContainer for \"ee7b1d793304698ec7d7450da053933c60c26fd1d389df0e0786060eaccf0d41\"" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:24.775 [INFO][4357] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0 calico-apiserver-5b65b9d44c- calico-apiserver e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5 941 0 2025-11-08 00:20:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b65b9d44c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-01b3a4b0a8 calico-apiserver-5b65b9d44c-vc5w2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali96ead31dd33 [] [] }} ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-vc5w2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:24.775 [INFO][4357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-vc5w2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:24.831 [INFO][4400] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" HandleID="k8s-pod-network.4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:24.831 [INFO][4400] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" HandleID="k8s-pod-network.4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-01b3a4b0a8", "pod":"calico-apiserver-5b65b9d44c-vc5w2", "timestamp":"2025-11-08 00:21:24.831700752 +0000 UTC"}, Hostname:"ci-4081.3.6-n-01b3a4b0a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:24.832 [INFO][4400] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:24.985 [INFO][4400] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:24.986 [INFO][4400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-01b3a4b0a8' Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.033 [INFO][4400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.056 [INFO][4400] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.072 [INFO][4400] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.076 [INFO][4400] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.082 [INFO][4400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.082 [INFO][4400] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.087 [INFO][4400] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608 Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.101 [INFO][4400] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.114 [INFO][4400] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.68/26] block=192.168.38.64/26 handle="k8s-pod-network.4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.114 [INFO][4400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.68/26] handle="k8s-pod-network.4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.115 [INFO][4400] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:25.216037 containerd[1591]: 2025-11-08 00:21:25.115 [INFO][4400] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.68/26] IPv6=[] ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" HandleID="k8s-pod-network.4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:25.216701 containerd[1591]: 2025-11-08 00:21:25.127 [INFO][4357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-vc5w2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0", GenerateName:"calico-apiserver-5b65b9d44c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b65b9d44c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"", Pod:"calico-apiserver-5b65b9d44c-vc5w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96ead31dd33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:25.216701 containerd[1591]: 2025-11-08 00:21:25.127 [INFO][4357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.68/32] ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-vc5w2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:25.216701 containerd[1591]: 2025-11-08 00:21:25.128 [INFO][4357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96ead31dd33 ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-vc5w2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:25.216701 containerd[1591]: 2025-11-08 00:21:25.161 [INFO][4357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-vc5w2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:25.216701 containerd[1591]: 2025-11-08 00:21:25.170 [INFO][4357] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-vc5w2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0", GenerateName:"calico-apiserver-5b65b9d44c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b65b9d44c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608", Pod:"calico-apiserver-5b65b9d44c-vc5w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96ead31dd33", MAC:"96:52:11:41:bd:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:25.216701 containerd[1591]: 2025-11-08 00:21:25.194 [INFO][4357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-vc5w2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:25.242558 systemd-networkd[1222]: cali71b8b794de3: Link UP Nov 8 00:21:25.252480 systemd-networkd[1222]: cali71b8b794de3: Gained carrier Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:24.751 [INFO][4368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0 calico-apiserver-5b65b9d44c- calico-apiserver 8d5058f1-2a34-4b46-bc5b-60d93e86f9f4 944 0 2025-11-08 00:20:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b65b9d44c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-01b3a4b0a8 calico-apiserver-5b65b9d44c-ld5bj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali71b8b794de3 [] [] }} ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-ld5bj" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-" Nov 8 00:21:25.270553 containerd[1591]: 
2025-11-08 00:21:24.755 [INFO][4368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-ld5bj" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:24.837 [INFO][4394] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" HandleID="k8s-pod-network.a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:24.838 [INFO][4394] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" HandleID="k8s-pod-network.a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-01b3a4b0a8", "pod":"calico-apiserver-5b65b9d44c-ld5bj", "timestamp":"2025-11-08 00:21:24.837830374 +0000 UTC"}, Hostname:"ci-4081.3.6-n-01b3a4b0a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:24.838 [INFO][4394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.115 [INFO][4394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.116 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-01b3a4b0a8' Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.132 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.155 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.168 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.172 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.178 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.178 [INFO][4394] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.182 [INFO][4394] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90 Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.208 [INFO][4394] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.229 [INFO][4394] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.69/26] block=192.168.38.64/26 handle="k8s-pod-network.a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.230 [INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.69/26] handle="k8s-pod-network.a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.230 [INFO][4394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:25.270553 containerd[1591]: 2025-11-08 00:21:25.230 [INFO][4394] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.69/26] IPv6=[] ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" HandleID="k8s-pod-network.a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:25.273686 containerd[1591]: 2025-11-08 00:21:25.237 [INFO][4368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-ld5bj" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0", GenerateName:"calico-apiserver-5b65b9d44c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d5058f1-2a34-4b46-bc5b-60d93e86f9f4", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b65b9d44c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"", Pod:"calico-apiserver-5b65b9d44c-ld5bj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71b8b794de3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:25.273686 containerd[1591]: 2025-11-08 00:21:25.238 [INFO][4368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.69/32] ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-ld5bj" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:25.273686 containerd[1591]: 2025-11-08 00:21:25.238 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71b8b794de3 ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-ld5bj" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:25.273686 containerd[1591]: 2025-11-08 00:21:25.244 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-ld5bj" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:25.273686 containerd[1591]: 2025-11-08 00:21:25.244 [INFO][4368] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-ld5bj" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0", GenerateName:"calico-apiserver-5b65b9d44c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d5058f1-2a34-4b46-bc5b-60d93e86f9f4", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b65b9d44c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90", Pod:"calico-apiserver-5b65b9d44c-ld5bj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71b8b794de3", MAC:"ca:e7:ff:cb:cb:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:25.273686 containerd[1591]: 2025-11-08 00:21:25.262 [INFO][4368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90" Namespace="calico-apiserver" Pod="calico-apiserver-5b65b9d44c-ld5bj" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:25.320379 containerd[1591]: time="2025-11-08T00:21:25.317821799Z" level=info msg="StopPodSandbox for \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\"" Nov 8 00:21:25.339739 containerd[1591]: time="2025-11-08T00:21:25.338628792Z" level=info msg="StopPodSandbox for \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\"" Nov 8 00:21:25.360965 containerd[1591]: time="2025-11-08T00:21:25.357111663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:25.360965 containerd[1591]: time="2025-11-08T00:21:25.357638282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:25.360965 containerd[1591]: time="2025-11-08T00:21:25.357661286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:25.360965 containerd[1591]: time="2025-11-08T00:21:25.358441465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:25.405799 containerd[1591]: time="2025-11-08T00:21:25.403981308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gs456,Uid:6e197bac-6071-4052-8e5a-3a64d2035a47,Namespace:calico-system,Attempt:1,} returns sandbox id \"6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d\"" Nov 8 00:21:25.409485 containerd[1591]: time="2025-11-08T00:21:25.409242705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:21:25.412252 containerd[1591]: time="2025-11-08T00:21:25.410721616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:25.412252 containerd[1591]: time="2025-11-08T00:21:25.412026637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:25.412252 containerd[1591]: time="2025-11-08T00:21:25.412057823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:25.412252 containerd[1591]: time="2025-11-08T00:21:25.412205747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:25.425745 containerd[1591]: time="2025-11-08T00:21:25.425683383Z" level=info msg="StartContainer for \"ee7b1d793304698ec7d7450da053933c60c26fd1d389df0e0786060eaccf0d41\" returns successfully" Nov 8 00:21:25.606576 systemd[1]: run-netns-cni\x2d77849f52\x2d1712\x2d8f67\x2d4c0d\x2d0f00846a68ed.mount: Deactivated successfully. Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.501 [INFO][4604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.502 [INFO][4604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" iface="eth0" netns="/var/run/netns/cni-88ad0134-0dee-dffc-0085-0f575a403d36" Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.504 [INFO][4604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" iface="eth0" netns="/var/run/netns/cni-88ad0134-0dee-dffc-0085-0f575a403d36" Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.505 [INFO][4604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" iface="eth0" netns="/var/run/netns/cni-88ad0134-0dee-dffc-0085-0f575a403d36" Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.505 [INFO][4604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.505 [INFO][4604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.616 [INFO][4669] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" HandleID="k8s-pod-network.e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.618 [INFO][4669] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.618 [INFO][4669] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.632 [WARNING][4669] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" HandleID="k8s-pod-network.e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.632 [INFO][4669] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" HandleID="k8s-pod-network.e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.638 [INFO][4669] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:25.657374 containerd[1591]: 2025-11-08 00:21:25.648 [INFO][4604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:25.666383 containerd[1591]: time="2025-11-08T00:21:25.665491102Z" level=info msg="TearDown network for sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\" successfully" Nov 8 00:21:25.666383 containerd[1591]: time="2025-11-08T00:21:25.665737832Z" level=info msg="StopPodSandbox for \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\" returns successfully" Nov 8 00:21:25.670571 systemd[1]: run-netns-cni\x2d88ad0134\x2d0dee\x2ddffc\x2d0085\x2d0f575a403d36.mount: Deactivated successfully. 
Nov 8 00:21:25.672271 containerd[1591]: time="2025-11-08T00:21:25.670986223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78689fc948-mm7k2,Uid:571339fa-a980-4274-be42-77b940705c5d,Namespace:calico-system,Attempt:1,}" Nov 8 00:21:25.742119 containerd[1591]: time="2025-11-08T00:21:25.742082117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b65b9d44c-vc5w2,Uid:e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608\"" Nov 8 00:21:25.745032 kubelet[2695]: E1108 00:21:25.744996 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.553 [INFO][4617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.555 [INFO][4617] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" iface="eth0" netns="/var/run/netns/cni-3dba28b9-1ef4-08aa-68a8-ca9b5c8f23de" Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.556 [INFO][4617] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" iface="eth0" netns="/var/run/netns/cni-3dba28b9-1ef4-08aa-68a8-ca9b5c8f23de" Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.558 [INFO][4617] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" iface="eth0" netns="/var/run/netns/cni-3dba28b9-1ef4-08aa-68a8-ca9b5c8f23de" Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.558 [INFO][4617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.558 [INFO][4617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.681 [INFO][4676] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" HandleID="k8s-pod-network.c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.682 [INFO][4676] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.682 [INFO][4676] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.699 [WARNING][4676] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" HandleID="k8s-pod-network.c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.699 [INFO][4676] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" HandleID="k8s-pod-network.c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.712 [INFO][4676] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:25.746150 containerd[1591]: 2025-11-08 00:21:25.732 [INFO][4617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:25.753519 containerd[1591]: time="2025-11-08T00:21:25.749175423Z" level=info msg="TearDown network for sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\" successfully" Nov 8 00:21:25.753519 containerd[1591]: time="2025-11-08T00:21:25.749204770Z" level=info msg="StopPodSandbox for \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\" returns successfully" Nov 8 00:21:25.756800 systemd[1]: run-netns-cni\x2d3dba28b9\x2d1ef4\x2d08aa\x2d68a8\x2dca9b5c8f23de.mount: Deactivated successfully. Nov 8 00:21:25.759110 kubelet[2695]: E1108 00:21:25.757641 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:25.764837 containerd[1591]: time="2025-11-08T00:21:25.762896432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbp7m,Uid:5b85782e-ef51-43c2-92d6-7721ec39bac1,Namespace:kube-system,Attempt:1,}" Nov 8 00:21:25.771495 containerd[1591]: time="2025-11-08T00:21:25.770322783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b65b9d44c-ld5bj,Uid:8d5058f1-2a34-4b46-bc5b-60d93e86f9f4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90\"" Nov 8 00:21:25.782226 kubelet[2695]: I1108 00:21:25.779123 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5blmv" podStartSLOduration=41.778919973 podStartE2EDuration="41.778919973s" podCreationTimestamp="2025-11-08 00:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:25.778854664 +0000 UTC m=+46.695622897" watchObservedRunningTime="2025-11-08 00:21:25.778919973 +0000 UTC m=+46.695688206" Nov 8 00:21:25.961047 containerd[1591]: time="2025-11-08T00:21:25.960220420Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:25.964639 containerd[1591]: time="2025-11-08T00:21:25.961371040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:21:25.964639 containerd[1591]: time="2025-11-08T00:21:25.961780264Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:25.964819 kubelet[2695]: E1108 00:21:25.962295 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:21:25.964819 kubelet[2695]: E1108 00:21:25.962362 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:21:25.964819 kubelet[2695]: E1108 00:21:25.962810 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbgw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gs456_calico-system(6e197bac-6071-4052-8e5a-3a64d2035a47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:25.964819 kubelet[2695]: E1108 00:21:25.964150 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gs456" podUID="6e197bac-6071-4052-8e5a-3a64d2035a47" Nov 8 00:21:25.970152 containerd[1591]: time="2025-11-08T00:21:25.965376997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:21:25.969816 systemd-networkd[1222]: cali7715faeb73d: Link UP Nov 8 00:21:25.970019 systemd-networkd[1222]: cali7715faeb73d: Gained carrier Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.858 [INFO][4694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0 calico-kube-controllers-78689fc948- calico-system 571339fa-a980-4274-be42-77b940705c5d 968 0 2025-11-08 00:21:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78689fc948 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-01b3a4b0a8 calico-kube-controllers-78689fc948-mm7k2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7715faeb73d [] [] }} ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" Namespace="calico-system" Pod="calico-kube-controllers-78689fc948-mm7k2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.858 [INFO][4694] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" Namespace="calico-system" Pod="calico-kube-controllers-78689fc948-mm7k2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 
00:21:25.907 [INFO][4725] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" HandleID="k8s-pod-network.8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.907 [INFO][4725] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" HandleID="k8s-pod-network.8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-01b3a4b0a8", "pod":"calico-kube-controllers-78689fc948-mm7k2", "timestamp":"2025-11-08 00:21:25.907624158 +0000 UTC"}, Hostname:"ci-4081.3.6-n-01b3a4b0a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.907 [INFO][4725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.907 [INFO][4725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.907 [INFO][4725] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-01b3a4b0a8' Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.917 [INFO][4725] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.924 [INFO][4725] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.929 [INFO][4725] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.933 [INFO][4725] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.938 [INFO][4725] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.938 [INFO][4725] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.942 [INFO][4725] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.947 [INFO][4725] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.956 [INFO][4725] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.70/26] 
block=192.168.38.64/26 handle="k8s-pod-network.8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.956 [INFO][4725] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.70/26] handle="k8s-pod-network.8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.956 [INFO][4725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:26.015937 containerd[1591]: 2025-11-08 00:21:25.956 [INFO][4725] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.70/26] IPv6=[] ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" HandleID="k8s-pod-network.8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:26.016709 containerd[1591]: 2025-11-08 00:21:25.960 [INFO][4694] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" Namespace="calico-system" Pod="calico-kube-controllers-78689fc948-mm7k2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0", GenerateName:"calico-kube-controllers-78689fc948-", Namespace:"calico-system", SelfLink:"", UID:"571339fa-a980-4274-be42-77b940705c5d", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78689fc948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"", Pod:"calico-kube-controllers-78689fc948-mm7k2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7715faeb73d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:26.016709 containerd[1591]: 2025-11-08 00:21:25.960 [INFO][4694] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.70/32] ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" Namespace="calico-system" Pod="calico-kube-controllers-78689fc948-mm7k2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:26.016709 containerd[1591]: 2025-11-08 00:21:25.961 [INFO][4694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7715faeb73d ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" 
Namespace="calico-system" Pod="calico-kube-controllers-78689fc948-mm7k2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:26.016709 containerd[1591]: 2025-11-08 00:21:25.965 [INFO][4694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" Namespace="calico-system" Pod="calico-kube-controllers-78689fc948-mm7k2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:26.016709 containerd[1591]: 2025-11-08 00:21:25.966 [INFO][4694] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" Namespace="calico-system" Pod="calico-kube-controllers-78689fc948-mm7k2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0", GenerateName:"calico-kube-controllers-78689fc948-", Namespace:"calico-system", SelfLink:"", UID:"571339fa-a980-4274-be42-77b940705c5d", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78689fc948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a", Pod:"calico-kube-controllers-78689fc948-mm7k2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7715faeb73d", MAC:"6a:b1:ce:0c:09:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:26.016709 containerd[1591]: 2025-11-08 00:21:26.003 [INFO][4694] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a" Namespace="calico-system" Pod="calico-kube-controllers-78689fc948-mm7k2" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:26.116763 containerd[1591]: time="2025-11-08T00:21:26.114826831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:26.116763 containerd[1591]: time="2025-11-08T00:21:26.114920981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:26.116763 containerd[1591]: time="2025-11-08T00:21:26.114937177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:26.116763 containerd[1591]: time="2025-11-08T00:21:26.115050712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:26.148168 systemd-networkd[1222]: cali9f4b25b1300: Link UP Nov 8 00:21:26.153625 systemd-networkd[1222]: cali9f4b25b1300: Gained carrier Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:25.887 [INFO][4709] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0 coredns-668d6bf9bc- kube-system 5b85782e-ef51-43c2-92d6-7721ec39bac1 969 0 2025-11-08 00:20:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-01b3a4b0a8 coredns-668d6bf9bc-cbp7m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9f4b25b1300 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp7m" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:25.888 [INFO][4709] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp7m" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:25.939 [INFO][4731] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" HandleID="k8s-pod-network.e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:25.939 [INFO][4731] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" HandleID="k8s-pod-network.e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332c00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-01b3a4b0a8", "pod":"coredns-668d6bf9bc-cbp7m", "timestamp":"2025-11-08 00:21:25.939038236 +0000 UTC"}, Hostname:"ci-4081.3.6-n-01b3a4b0a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:25.939 [INFO][4731] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:25.956 [INFO][4731] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
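
[Editor's note] The PullImage failures above (goldmane, apiserver) all bottom out in the same containerd resolution error: ghcr.io answers http.StatusNotFound for the v3.30.4 tag. One way to confirm the tag is missing upstream rather than being a kubelet-side fault is to drive the identical pull through the containerd Go client on the node; this is an illustrative sketch only, assuming the default containerd socket path and the "k8s.io" namespace that kubelet-managed images live in.

```go
// Reproduce the pull directly against containerd (the containerd[1591]
// process in the log) to isolate the registry-side "not found".
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// kubelet-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4"
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		// Expected here: failed to resolve reference "...": not found,
		// matching the rpc error kubelet reports above.
		fmt.Println("pull failed:", err)
	}
}
```
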
Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:25.956 [INFO][4731] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-01b3a4b0a8' Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.031 [INFO][4731] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.048 [INFO][4731] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.065 [INFO][4731] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.069 [INFO][4731] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.089 [INFO][4731] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.089 [INFO][4731] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.097 [INFO][4731] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613 Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.106 [INFO][4731] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.131 [INFO][4731] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.71/26] block=192.168.38.64/26 handle="k8s-pod-network.e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.131 [INFO][4731] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.71/26] handle="k8s-pod-network.e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.131 [INFO][4731] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
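
[Editor's note] The IPAM walk that just completed (acquire host-wide lock → look up host affinities → confirm block 192.168.38.64/26 → claim .70, then .71) is Calico's block-affinity allocator: the node owns an affine /26 and hands out the next free ordinal under a single lock, which is why the assignments in this capture serialize. A minimal sketch of that policy — not Calico's actual code, just the shape of it:

```go
// Simplified block-affinity assignment: one lock, one /26, next free slot.
package main

import (
	"fmt"
	"net"
	"sync"
)

type ipamBlock struct {
	mu   sync.Mutex     // stands in for the host-wide IPAM lock in the log
	cidr *net.IPNet     // e.g. 192.168.38.64/26, affine to this node
	used map[int]string // ordinal -> handle ID
	size int            // 64 addresses in a /26
}

func (b *ipamBlock) autoAssign(handle string) (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	base := b.cidr.IP.To4()
	for ord := 0; ord < b.size; ord++ {
		if _, taken := b.used[ord]; taken {
			continue
		}
		b.used[ord] = handle
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(ord)), nil
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.38.64/26")
	block := &ipamBlock{cidr: cidr, used: map[int]string{}, size: 64}
	// Ordinals 0-5 were claimed by earlier pods in this capture, so the
	// next two assignments yield .70 and .71, matching the log.
	for ord := 0; ord < 6; ord++ {
		block.used[ord] = "existing"
	}
	a, _ := block.autoAssign("k8s-pod-network.8af455b4...")
	b, _ := block.autoAssign("k8s-pod-network.e1f29b92...")
	fmt.Println(a, b) // 192.168.38.70 192.168.38.71
}
```
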
Nov 8 00:21:26.194535 containerd[1591]: 2025-11-08 00:21:26.132 [INFO][4731] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.71/26] IPv6=[] ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" HandleID="k8s-pod-network.e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:26.200065 containerd[1591]: 2025-11-08 00:21:26.138 [INFO][4709] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp7m" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b85782e-ef51-43c2-92d6-7721ec39bac1", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"", Pod:"coredns-668d6bf9bc-cbp7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f4b25b1300", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:26.200065 containerd[1591]: 2025-11-08 00:21:26.138 [INFO][4709] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.71/32] ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp7m" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:26.200065 containerd[1591]: 2025-11-08 00:21:26.138 [INFO][4709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f4b25b1300 ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp7m" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:26.200065 containerd[1591]: 2025-11-08 00:21:26.149 [INFO][4709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-cbp7m" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:26.200065 containerd[1591]: 2025-11-08 00:21:26.156 [INFO][4709] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp7m" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b85782e-ef51-43c2-92d6-7721ec39bac1", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613", Pod:"coredns-668d6bf9bc-cbp7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f4b25b1300", MAC:"9e:17:bf:da:a3:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:26.200065 containerd[1591]: 2025-11-08 00:21:26.180 [INFO][4709] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp7m" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:26.252189 containerd[1591]: time="2025-11-08T00:21:26.251673378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78689fc948-mm7k2,Uid:571339fa-a980-4274-be42-77b940705c5d,Namespace:calico-system,Attempt:1,} returns sandbox id \"8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a\"" Nov 8 00:21:26.258218 containerd[1591]: time="2025-11-08T00:21:26.256081088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:26.258218 containerd[1591]: time="2025-11-08T00:21:26.256155934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:26.258218 containerd[1591]: time="2025-11-08T00:21:26.256179851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:26.258218 containerd[1591]: time="2025-11-08T00:21:26.256311389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:26.310246 containerd[1591]: time="2025-11-08T00:21:26.309292596Z" level=info msg="StopPodSandbox for \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\"" Nov 8 00:21:26.333427 containerd[1591]: time="2025-11-08T00:21:26.332784398Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:26.335283 containerd[1591]: time="2025-11-08T00:21:26.334955642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:21:26.335283 containerd[1591]: time="2025-11-08T00:21:26.335010344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:26.336814 kubelet[2695]: E1108 00:21:26.336360 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:26.336814 kubelet[2695]: E1108 00:21:26.336418 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:26.338672 containerd[1591]: time="2025-11-08T00:21:26.338080574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:21:26.338834 kubelet[2695]: E1108 00:21:26.337856 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxthx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b65b9d44c-vc5w2_calico-apiserver(e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:26.339666 kubelet[2695]: E1108 00:21:26.339604 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" podUID="e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5" Nov 8 00:21:26.359048 containerd[1591]: time="2025-11-08T00:21:26.358680097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbp7m,Uid:5b85782e-ef51-43c2-92d6-7721ec39bac1,Namespace:kube-system,Attempt:1,} returns sandbox id \"e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613\"" Nov 8 00:21:26.361742 kubelet[2695]: E1108 00:21:26.361196 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:26.370075 containerd[1591]: time="2025-11-08T00:21:26.369930070Z" level=info msg="CreateContainer within sandbox \"e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:21:26.390011 containerd[1591]: time="2025-11-08T00:21:26.389946803Z" level=info msg="CreateContainer within sandbox \"e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c267eff755f2a51f4d780c07dd59a779e2e9d92da1af238f0eeb45d005381fef\"" Nov 8 00:21:26.398569 containerd[1591]: time="2025-11-08T00:21:26.398379401Z" level=info msg="StartContainer for \"c267eff755f2a51f4d780c07dd59a779e2e9d92da1af238f0eeb45d005381fef\"" Nov 8 00:21:26.415726 systemd-networkd[1222]: cali71b8b794de3: Gained IPv6LL Nov 8 00:21:26.536090 containerd[1591]: time="2025-11-08T00:21:26.535959926Z" level=info msg="StartContainer for \"c267eff755f2a51f4d780c07dd59a779e2e9d92da1af238f0eeb45d005381fef\" returns successfully" Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.512 [INFO][4842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.513 [INFO][4842] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" iface="eth0" netns="/var/run/netns/cni-0919b286-1c3c-43c0-9a2f-516293a14a42" Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.514 [INFO][4842] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" iface="eth0" netns="/var/run/netns/cni-0919b286-1c3c-43c0-9a2f-516293a14a42" Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.514 [INFO][4842] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" iface="eth0" netns="/var/run/netns/cni-0919b286-1c3c-43c0-9a2f-516293a14a42" Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.515 [INFO][4842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.515 [INFO][4842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.556 [INFO][4882] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" HandleID="k8s-pod-network.edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.556 [INFO][4882] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.556 [INFO][4882] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.566 [WARNING][4882] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" HandleID="k8s-pod-network.edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.566 [INFO][4882] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" HandleID="k8s-pod-network.edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.569 [INFO][4882] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:26.583376 containerd[1591]: 2025-11-08 00:21:26.574 [INFO][4842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:26.589620 containerd[1591]: time="2025-11-08T00:21:26.586222332Z" level=info msg="TearDown network for sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\" successfully" Nov 8 00:21:26.589620 containerd[1591]: time="2025-11-08T00:21:26.586422100Z" level=info msg="StopPodSandbox for \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\" returns successfully" Nov 8 00:21:26.590480 containerd[1591]: time="2025-11-08T00:21:26.590283031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2mnck,Uid:f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384,Namespace:calico-system,Attempt:1,}" Nov 8 00:21:26.612550 systemd[1]: run-netns-cni\x2d0919b286\x2d1c3c\x2d43c0\x2d9a2f\x2d516293a14a42.mount: Deactivated successfully. Nov 8 00:21:26.673287 systemd-networkd[1222]: califa7ce56622c: Gained IPv6LL Nov 8 00:21:26.675351 systemd-networkd[1222]: cali1c08bba9206: Gained IPv6LL Nov 8 00:21:26.676974 systemd-networkd[1222]: cali96ead31dd33: Gained IPv6LL Nov 8 00:21:26.681236 containerd[1591]: time="2025-11-08T00:21:26.681183016Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:26.682364 containerd[1591]: time="2025-11-08T00:21:26.682234485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:21:26.682364 containerd[1591]: time="2025-11-08T00:21:26.682322484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:26.682912 kubelet[2695]: E1108 00:21:26.682621 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:26.682912 kubelet[2695]: E1108 00:21:26.682672 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 
8 00:21:26.685197 containerd[1591]: time="2025-11-08T00:21:26.684891620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:21:26.702951 kubelet[2695]: E1108 00:21:26.702871 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vz5wp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b65b9d44c-ld5bj_calico-apiserver(8d5058f1-2a34-4b46-bc5b-60d93e86f9f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:26.709651 kubelet[2695]: E1108 00:21:26.709595 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" podUID="8d5058f1-2a34-4b46-bc5b-60d93e86f9f4" Nov 8 00:21:26.797150 kubelet[2695]: E1108 00:21:26.797016 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" podUID="e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5" Nov 8 00:21:26.813247 kubelet[2695]: E1108 00:21:26.812515 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:26.845652 kubelet[2695]: I1108 00:21:26.845582 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cbp7m" podStartSLOduration=42.843345507 podStartE2EDuration="42.843345507s" podCreationTimestamp="2025-11-08 00:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:26.841404537 +0000 UTC m=+47.758172766" watchObservedRunningTime="2025-11-08 00:21:26.843345507 +0000 UTC m=+47.760113737" Nov 8 00:21:26.855591 kubelet[2695]: E1108 00:21:26.852553 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:26.857847 kubelet[2695]: E1108 00:21:26.855743 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gs456" podUID="6e197bac-6071-4052-8e5a-3a64d2035a47" Nov 8 00:21:26.857847 kubelet[2695]: E1108 00:21:26.855829 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" podUID="8d5058f1-2a34-4b46-bc5b-60d93e86f9f4" Nov 8 00:21:26.979120 systemd-networkd[1222]: calic213aa70661: Link UP Nov 8 00:21:26.981515 systemd-networkd[1222]: calic213aa70661: Gained carrier Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.709 [INFO][4899] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0 csi-node-driver- calico-system f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384 994 0 2025-11-08 00:21:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-01b3a4b0a8 csi-node-driver-2mnck eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] 
calic213aa70661 [] [] }} ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Namespace="calico-system" Pod="csi-node-driver-2mnck" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.710 [INFO][4899] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Namespace="calico-system" Pod="csi-node-driver-2mnck" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.826 [INFO][4912] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" HandleID="k8s-pod-network.40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.830 [INFO][4912] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" HandleID="k8s-pod-network.40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-01b3a4b0a8", "pod":"csi-node-driver-2mnck", "timestamp":"2025-11-08 00:21:26.82627417 +0000 UTC"}, Hostname:"ci-4081.3.6-n-01b3a4b0a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.830 [INFO][4912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.830 [INFO][4912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
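
[Editor's note] The kubelet dns.go warnings sprinkled through this capture ("Nameserver limits exceeded ... 67.207.67.2 67.207.67.3 67.207.67.2") mean the droplet's resolv.conf lists more nameservers than the three the resolver supports, so kubelet applies only the first three — duplicates included, which is why 67.207.67.2 appears twice in the applied line. A sketch of that clamping, assuming a plain resolv.conf format:

```go
// Clamp a resolv.conf nameserver list to the three-entry resolver limit,
// the way the warnings above describe.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolver limit that triggers the kubelet warning

func clampNameservers(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // extras are silently omitted
	}
	return servers
}

func main() {
	conf := "nameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 1.1.1.1\n"
	// Prints [67.207.67.2 67.207.67.3 67.207.67.2] — the applied line in the log.
	fmt.Println(clampNameservers(conf))
}
```
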
Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.830 [INFO][4912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-01b3a4b0a8' Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.861 [INFO][4912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.903 [INFO][4912] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.935 [INFO][4912] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.942 [INFO][4912] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.947 [INFO][4912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.947 [INFO][4912] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.950 [INFO][4912] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21 Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.956 [INFO][4912] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.965 [INFO][4912] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.72/26] block=192.168.38.64/26 handle="k8s-pod-network.40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.965 [INFO][4912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.72/26] handle="k8s-pod-network.40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" host="ci-4081.3.6-n-01b3a4b0a8" Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.965 [INFO][4912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
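
[Editor's note] Each ErrImagePull above is followed by ImagePullBackOff records ("Back-off pulling image ...") because kubelet retries failed pulls on an exponential schedule rather than hammering the registry. The upstream defaults are on the order of a 10-second initial delay doubling up to a five-minute ceiling — treat those exact numbers as assumptions, since they are kubelet-configurable. A sketch of the resulting delay schedule:

```go
// Exponential image-pull backoff schedule: double each attempt, capped.
package main

import (
	"fmt"
	"time"
)

func backoffDelays(initial, ceiling time.Duration, attempts int) []time.Duration {
	delays := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > ceiling {
			d = ceiling
		}
	}
	return delays
}

func main() {
	// With the assumed defaults: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
	fmt.Println(backoffDelays(10*time.Second, 5*time.Minute, 7))
}
```

Until the missing v3.30.4 tags become resolvable, these pods cycle between ErrImagePull and ImagePullBackOff exactly as the pod_workers.go lines show.
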
Nov 8 00:21:27.013580 containerd[1591]: 2025-11-08 00:21:26.965 [INFO][4912] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.72/26] IPv6=[] ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" HandleID="k8s-pod-network.40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:27.015265 containerd[1591]: 2025-11-08 00:21:26.971 [INFO][4899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Namespace="calico-system" Pod="csi-node-driver-2mnck" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"", Pod:"csi-node-driver-2mnck", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic213aa70661", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:27.015265 containerd[1591]: 2025-11-08 00:21:26.971 [INFO][4899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.72/32] ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Namespace="calico-system" Pod="csi-node-driver-2mnck" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:27.015265 containerd[1591]: 2025-11-08 00:21:26.971 [INFO][4899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic213aa70661 ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Namespace="calico-system" Pod="csi-node-driver-2mnck" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:27.015265 containerd[1591]: 2025-11-08 00:21:26.988 [INFO][4899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Namespace="calico-system" Pod="csi-node-driver-2mnck" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:27.015265 containerd[1591]: 2025-11-08 00:21:26.991 [INFO][4899] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Namespace="calico-system" Pod="csi-node-driver-2mnck" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21", Pod:"csi-node-driver-2mnck", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic213aa70661", MAC:"de:db:64:81:af:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:27.015265 containerd[1591]: 2025-11-08 00:21:27.007 [INFO][4899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21" Namespace="calico-system" Pod="csi-node-driver-2mnck" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:27.047627 containerd[1591]: time="2025-11-08T00:21:27.047190941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:27.049224 containerd[1591]: time="2025-11-08T00:21:27.048361324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:27.049224 containerd[1591]: time="2025-11-08T00:21:27.048386003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:27.050373 containerd[1591]: time="2025-11-08T00:21:27.050096842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:27.141828 containerd[1591]: time="2025-11-08T00:21:27.141684486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2mnck,Uid:f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384,Namespace:calico-system,Attempt:1,} returns sandbox id \"40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21\"" Nov 8 00:21:27.568873 systemd-networkd[1222]: cali7715faeb73d: Gained IPv6LL Nov 8 00:21:27.859394 kubelet[2695]: E1108 00:21:27.858408 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:27.862546 kubelet[2695]: E1108 00:21:27.860325 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:27.863164 kubelet[2695]: E1108 00:21:27.862972 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" podUID="e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5" Nov 8 00:21:27.863164 kubelet[2695]: E1108 00:21:27.863004 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" podUID="8d5058f1-2a34-4b46-bc5b-60d93e86f9f4" Nov 8 00:21:28.038002 containerd[1591]: time="2025-11-08T00:21:28.037909303Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:28.038872 containerd[1591]: time="2025-11-08T00:21:28.038798959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:21:28.039007 containerd[1591]: time="2025-11-08T00:21:28.038934163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:28.039308 kubelet[2695]: E1108 00:21:28.039249 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:28.039399 kubelet[2695]: E1108 00:21:28.039322 2695 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:28.040126 containerd[1591]: time="2025-11-08T00:21:28.039830467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:21:28.040235 kubelet[2695]: E1108 00:21:28.039925 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95h7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78689fc948-mm7k2_calico-system(571339fa-a980-4274-be42-77b940705c5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:28.041293 kubelet[2695]: E1108 00:21:28.041247 2695 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" podUID="571339fa-a980-4274-be42-77b940705c5d" Nov 8 00:21:28.079819 systemd-networkd[1222]: cali9f4b25b1300: Gained IPv6LL Nov 8 00:21:28.394603 containerd[1591]: time="2025-11-08T00:21:28.394540765Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:28.396689 containerd[1591]: time="2025-11-08T00:21:28.395329522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:21:28.396689 containerd[1591]: time="2025-11-08T00:21:28.395398613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:21:28.396874 kubelet[2695]: E1108 00:21:28.395639 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:21:28.396874 kubelet[2695]: E1108 00:21:28.395689 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:21:28.396874 kubelet[2695]: E1108 00:21:28.395806 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfpxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2mnck_calico-system(f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:28.398878 containerd[1591]: time="2025-11-08T00:21:28.398715415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:21:28.731797 containerd[1591]: time="2025-11-08T00:21:28.731722437Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:28.732668 containerd[1591]: time="2025-11-08T00:21:28.732559064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:21:28.732668 containerd[1591]: time="2025-11-08T00:21:28.732611534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:21:28.732876 kubelet[2695]: E1108 00:21:28.732832 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:21:28.732951 kubelet[2695]: E1108 00:21:28.732889 2695 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:21:28.733049 kubelet[2695]: E1108 00:21:28.733014 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfpxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2mnck_calico-system(f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:28.734410 kubelet[2695]: E1108 00:21:28.734330 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384" Nov 8 00:21:28.860949 kubelet[2695]: E1108 00:21:28.860619 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:28.861902 kubelet[2695]: E1108 00:21:28.861852 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" podUID="571339fa-a980-4274-be42-77b940705c5d" Nov 8 00:21:28.864120 kubelet[2695]: E1108 00:21:28.864044 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384" Nov 8 00:21:28.911637 systemd-networkd[1222]: calic213aa70661: Gained IPv6LL Nov 8 00:21:34.312052 containerd[1591]: time="2025-11-08T00:21:34.311926598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:21:34.680220 containerd[1591]: time="2025-11-08T00:21:34.680142012Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:34.681162 containerd[1591]: time="2025-11-08T00:21:34.681021079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:21:34.681162 containerd[1591]: time="2025-11-08T00:21:34.681086735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:21:34.681874 kubelet[2695]: E1108 00:21:34.681512 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:34.681874 kubelet[2695]: E1108 00:21:34.681592 2695 kuberuntime_image.go:55] "Failed 
to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:34.681874 kubelet[2695]: E1108 00:21:34.681809 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c0f4f730874f4da2b1ae525f279b9089,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ddv98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74db99b9f5-n8j6t_calico-system(a194daac-f83a-4a21-ba16-72b7bfe8925b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:34.684262 containerd[1591]: time="2025-11-08T00:21:34.684221484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:21:35.022787 containerd[1591]: time="2025-11-08T00:21:35.022611952Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:35.023609 containerd[1591]: time="2025-11-08T00:21:35.023550889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:21:35.023699 containerd[1591]: time="2025-11-08T00:21:35.023657988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:35.023932 kubelet[2695]: E1108 00:21:35.023889 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:21:35.024021 kubelet[2695]: E1108 00:21:35.023946 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:21:35.024097 kubelet[2695]: E1108 00:21:35.024061 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddv98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74db99b9f5-n8j6t_calico-system(a194daac-f83a-4a21-ba16-72b7bfe8925b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:35.025204 kubelet[2695]: E1108 00:21:35.025157 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74db99b9f5-n8j6t" podUID="a194daac-f83a-4a21-ba16-72b7bfe8925b" Nov 8 00:21:36.861657 systemd[1]: Started sshd@7-64.23.144.43:22-139.178.68.195:46558.service - OpenSSH per-connection server daemon (139.178.68.195:46558). Nov 8 00:21:36.959498 sshd[4992]: Accepted publickey for core from 139.178.68.195 port 46558 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:36.965130 sshd[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:36.977913 systemd-logind[1566]: New session 8 of user core. Nov 8 00:21:36.982904 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:21:37.433509 sshd[4992]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:37.437184 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:21:37.438135 systemd[1]: sshd@7-64.23.144.43:22-139.178.68.195:46558.service: Deactivated successfully. Nov 8 00:21:37.443292 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:21:37.444687 systemd-logind[1566]: Removed session 8. Nov 8 00:21:38.310303 containerd[1591]: time="2025-11-08T00:21:38.310258632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:21:38.646727 containerd[1591]: time="2025-11-08T00:21:38.646317337Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:38.647202 containerd[1591]: time="2025-11-08T00:21:38.647103940Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:21:38.647309 containerd[1591]: time="2025-11-08T00:21:38.647164076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:38.647417 kubelet[2695]: E1108 00:21:38.647368 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:21:38.648317 kubelet[2695]: E1108 00:21:38.647427 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:21:38.648811 containerd[1591]: time="2025-11-08T00:21:38.647768875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:21:38.649080 kubelet[2695]: E1108 00:21:38.648560 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbgw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gs456_calico-system(6e197bac-6071-4052-8e5a-3a64d2035a47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:38.649993 kubelet[2695]: E1108 00:21:38.649959 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gs456" podUID="6e197bac-6071-4052-8e5a-3a64d2035a47" Nov 8 00:21:38.980502 containerd[1591]: 
time="2025-11-08T00:21:38.980407150Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:38.981505 containerd[1591]: time="2025-11-08T00:21:38.981429493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:21:38.983361 containerd[1591]: time="2025-11-08T00:21:38.981483845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:38.983429 kubelet[2695]: E1108 00:21:38.981769 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:38.983429 kubelet[2695]: E1108 00:21:38.981827 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:38.983429 kubelet[2695]: E1108 00:21:38.981963 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxthx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b65b9d44c-vc5w2_calico-apiserver(e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:38.983793 kubelet[2695]: E1108 00:21:38.983744 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" podUID="e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5" Nov 8 00:21:39.306867 containerd[1591]: time="2025-11-08T00:21:39.306121082Z" level=info msg="StopPodSandbox for \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\"" Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.361 [WARNING][5020] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21", Pod:"csi-node-driver-2mnck", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic213aa70661", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.363 [INFO][5020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.363 [INFO][5020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" iface="eth0" netns="" Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.363 [INFO][5020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.363 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.402 [INFO][5028] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" HandleID="k8s-pod-network.edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.402 [INFO][5028] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.402 [INFO][5028] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.411 [WARNING][5028] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" HandleID="k8s-pod-network.edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.411 [INFO][5028] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" HandleID="k8s-pod-network.edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.413 [INFO][5028] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:39.419015 containerd[1591]: 2025-11-08 00:21:39.415 [INFO][5020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:39.419801 containerd[1591]: time="2025-11-08T00:21:39.419078724Z" level=info msg="TearDown network for sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\" successfully" Nov 8 00:21:39.419801 containerd[1591]: time="2025-11-08T00:21:39.419117552Z" level=info msg="StopPodSandbox for \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\" returns successfully" Nov 8 00:21:39.420544 containerd[1591]: time="2025-11-08T00:21:39.420385326Z" level=info msg="RemovePodSandbox for \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\"" Nov 8 00:21:39.422622 containerd[1591]: time="2025-11-08T00:21:39.422588249Z" level=info msg="Forcibly stopping sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\"" Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.470 [WARNING][5042] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"40408ce8e41c4d54c0ad96f14644ead6901cb68ada60fd140141d3829bfcfe21", Pod:"csi-node-driver-2mnck", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic213aa70661", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.470 [INFO][5042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.470 [INFO][5042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" iface="eth0" netns="" Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.470 [INFO][5042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.470 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.494 [INFO][5049] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" HandleID="k8s-pod-network.edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.494 [INFO][5049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.494 [INFO][5049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.501 [WARNING][5049] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" HandleID="k8s-pod-network.edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.501 [INFO][5049] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" HandleID="k8s-pod-network.edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-csi--node--driver--2mnck-eth0" Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.503 [INFO][5049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:39.508233 containerd[1591]: 2025-11-08 00:21:39.505 [INFO][5042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162" Nov 8 00:21:39.508893 containerd[1591]: time="2025-11-08T00:21:39.508443724Z" level=info msg="TearDown network for sandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\" successfully" Nov 8 00:21:39.516095 containerd[1591]: time="2025-11-08T00:21:39.515991392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:39.516258 containerd[1591]: time="2025-11-08T00:21:39.516122827Z" level=info msg="RemovePodSandbox \"edadaec23322ba51cb554315b68f26a457d31ea5bdd3241ae88f81a419b05162\" returns successfully" Nov 8 00:21:39.516893 containerd[1591]: time="2025-11-08T00:21:39.516866852Z" level=info msg="StopPodSandbox for \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\"" Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.556 [WARNING][5063] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.556 [INFO][5063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.556 [INFO][5063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" iface="eth0" netns="" Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.556 [INFO][5063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.556 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.581 [INFO][5070] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" HandleID="k8s-pod-network.26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.581 [INFO][5070] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.581 [INFO][5070] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.588 [WARNING][5070] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" HandleID="k8s-pod-network.26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.588 [INFO][5070] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" HandleID="k8s-pod-network.26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.590 [INFO][5070] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:39.594753 containerd[1591]: 2025-11-08 00:21:39.592 [INFO][5063] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:39.594753 containerd[1591]: time="2025-11-08T00:21:39.594708228Z" level=info msg="TearDown network for sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\" successfully" Nov 8 00:21:39.594753 containerd[1591]: time="2025-11-08T00:21:39.594737285Z" level=info msg="StopPodSandbox for \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\" returns successfully" Nov 8 00:21:39.596763 containerd[1591]: time="2025-11-08T00:21:39.596067357Z" level=info msg="RemovePodSandbox for \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\"" Nov 8 00:21:39.596763 containerd[1591]: time="2025-11-08T00:21:39.596247790Z" level=info msg="Forcibly stopping sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\"" Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.635 [WARNING][5084] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" WorkloadEndpoint="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.635 [INFO][5084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.635 [INFO][5084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" iface="eth0" netns="" Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.635 [INFO][5084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.635 [INFO][5084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.661 [INFO][5091] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" HandleID="k8s-pod-network.26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.661 [INFO][5091] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.661 [INFO][5091] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.668 [WARNING][5091] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" HandleID="k8s-pod-network.26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.668 [INFO][5091] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" HandleID="k8s-pod-network.26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-whisker--6478bcb995--55zjb-eth0" Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.670 [INFO][5091] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:39.675752 containerd[1591]: 2025-11-08 00:21:39.673 [INFO][5084] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a" Nov 8 00:21:39.676270 containerd[1591]: time="2025-11-08T00:21:39.675865747Z" level=info msg="TearDown network for sandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\" successfully" Nov 8 00:21:39.679913 containerd[1591]: time="2025-11-08T00:21:39.679543314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:39.679913 containerd[1591]: time="2025-11-08T00:21:39.679628205Z" level=info msg="RemovePodSandbox \"26bb0f5fa2ffc06602d60c42c07984ccb5c34637b943acde1ab3f73437adb33a\" returns successfully" Nov 8 00:21:39.680565 containerd[1591]: time="2025-11-08T00:21:39.680492730Z" level=info msg="StopPodSandbox for \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\"" Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.722 [WARNING][5105] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0", GenerateName:"calico-apiserver-5b65b9d44c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b65b9d44c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608", Pod:"calico-apiserver-5b65b9d44c-vc5w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96ead31dd33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.723 [INFO][5105] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.723 [INFO][5105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" iface="eth0" netns="" Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.723 [INFO][5105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.723 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.751 [INFO][5112] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" HandleID="k8s-pod-network.a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.751 [INFO][5112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.751 [INFO][5112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.758 [WARNING][5112] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" HandleID="k8s-pod-network.a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.759 [INFO][5112] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" HandleID="k8s-pod-network.a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.761 [INFO][5112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:39.766582 containerd[1591]: 2025-11-08 00:21:39.764 [INFO][5105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:39.767240 containerd[1591]: time="2025-11-08T00:21:39.766805952Z" level=info msg="TearDown network for sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\" successfully" Nov 8 00:21:39.767240 containerd[1591]: time="2025-11-08T00:21:39.766836594Z" level=info msg="StopPodSandbox for \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\" returns successfully" Nov 8 00:21:39.767885 containerd[1591]: time="2025-11-08T00:21:39.767832146Z" level=info msg="RemovePodSandbox for \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\"" Nov 8 00:21:39.767885 containerd[1591]: time="2025-11-08T00:21:39.767876452Z" level=info msg="Forcibly stopping sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\"" Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.839 [WARNING][5126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0", GenerateName:"calico-apiserver-5b65b9d44c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b65b9d44c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"4a1f5f0b6bc0e146f6e80eb4e48cc39f8a70566e66fe33b3bbb89bf47c5f4608", Pod:"calico-apiserver-5b65b9d44c-vc5w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96ead31dd33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.839 [INFO][5126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.839 [INFO][5126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" iface="eth0" netns="" Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.839 [INFO][5126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.839 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.866 [INFO][5133] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" HandleID="k8s-pod-network.a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.866 [INFO][5133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.866 [INFO][5133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.873 [WARNING][5133] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" HandleID="k8s-pod-network.a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.873 [INFO][5133] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" HandleID="k8s-pod-network.a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--vc5w2-eth0" Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.875 [INFO][5133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:39.881567 containerd[1591]: 2025-11-08 00:21:39.878 [INFO][5126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd" Nov 8 00:21:39.881567 containerd[1591]: time="2025-11-08T00:21:39.881378317Z" level=info msg="TearDown network for sandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\" successfully" Nov 8 00:21:39.887847 containerd[1591]: time="2025-11-08T00:21:39.887406500Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:39.887847 containerd[1591]: time="2025-11-08T00:21:39.887505377Z" level=info msg="RemovePodSandbox \"a32aae763f0f6d623e4571a2afd1be509120492aa7bb4d248dda882170c2ddcd\" returns successfully" Nov 8 00:21:39.888122 containerd[1591]: time="2025-11-08T00:21:39.887983196Z" level=info msg="StopPodSandbox for \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\"" Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.945 [WARNING][5147] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0", GenerateName:"calico-apiserver-5b65b9d44c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d5058f1-2a34-4b46-bc5b-60d93e86f9f4", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b65b9d44c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90", Pod:"calico-apiserver-5b65b9d44c-ld5bj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71b8b794de3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.945 [INFO][5147] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.945 [INFO][5147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" iface="eth0" netns="" Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.945 [INFO][5147] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.946 [INFO][5147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.974 [INFO][5155] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" HandleID="k8s-pod-network.c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.974 [INFO][5155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.974 [INFO][5155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.982 [WARNING][5155] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" HandleID="k8s-pod-network.c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.982 [INFO][5155] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" HandleID="k8s-pod-network.c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.984 [INFO][5155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:39.990030 containerd[1591]: 2025-11-08 00:21:39.986 [INFO][5147] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:39.991457 containerd[1591]: time="2025-11-08T00:21:39.990560822Z" level=info msg="TearDown network for sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\" successfully" Nov 8 00:21:39.991457 containerd[1591]: time="2025-11-08T00:21:39.990605071Z" level=info msg="StopPodSandbox for \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\" returns successfully" Nov 8 00:21:39.991457 containerd[1591]: time="2025-11-08T00:21:39.991120084Z" level=info msg="RemovePodSandbox for \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\"" Nov 8 00:21:39.991457 containerd[1591]: time="2025-11-08T00:21:39.991148476Z" level=info msg="Forcibly stopping sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\"" Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.035 [WARNING][5169] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0", GenerateName:"calico-apiserver-5b65b9d44c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d5058f1-2a34-4b46-bc5b-60d93e86f9f4", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b65b9d44c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"a8e1dde6fdd2cd127aaab51e21404c95f6aeae37b45230489a829b9d13a69e90", Pod:"calico-apiserver-5b65b9d44c-ld5bj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71b8b794de3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.036 [INFO][5169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.036 [INFO][5169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" iface="eth0" netns="" Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.036 [INFO][5169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.036 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.063 [INFO][5176] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" HandleID="k8s-pod-network.c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.064 [INFO][5176] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.064 [INFO][5176] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.073 [WARNING][5176] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" HandleID="k8s-pod-network.c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.073 [INFO][5176] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" HandleID="k8s-pod-network.c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--apiserver--5b65b9d44c--ld5bj-eth0" Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.075 [INFO][5176] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:40.080576 containerd[1591]: 2025-11-08 00:21:40.077 [INFO][5169] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0" Nov 8 00:21:40.081833 containerd[1591]: time="2025-11-08T00:21:40.080627832Z" level=info msg="TearDown network for sandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\" successfully" Nov 8 00:21:40.086294 containerd[1591]: time="2025-11-08T00:21:40.086228063Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:40.087289 containerd[1591]: time="2025-11-08T00:21:40.086337995Z" level=info msg="RemovePodSandbox \"c64a5c58f57df30cc65dfeceea64b3abb158a4db1d30a1729da6d3f0e5697ae0\" returns successfully" Nov 8 00:21:40.088182 containerd[1591]: time="2025-11-08T00:21:40.087740289Z" level=info msg="StopPodSandbox for \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\"" Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.136 [WARNING][5190] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b85782e-ef51-43c2-92d6-7721ec39bac1", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613", Pod:"coredns-668d6bf9bc-cbp7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f4b25b1300", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.137 [INFO][5190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.137 [INFO][5190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" iface="eth0" netns="" Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.137 [INFO][5190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.137 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.173 [INFO][5197] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" HandleID="k8s-pod-network.c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.173 [INFO][5197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.173 [INFO][5197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.182 [WARNING][5197] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" HandleID="k8s-pod-network.c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.182 [INFO][5197] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" HandleID="k8s-pod-network.c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.185 [INFO][5197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:40.191118 containerd[1591]: 2025-11-08 00:21:40.188 [INFO][5190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:40.192886 containerd[1591]: time="2025-11-08T00:21:40.191155387Z" level=info msg="TearDown network for sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\" successfully" Nov 8 00:21:40.192886 containerd[1591]: time="2025-11-08T00:21:40.191185665Z" level=info msg="StopPodSandbox for \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\" returns successfully" Nov 8 00:21:40.192886 containerd[1591]: time="2025-11-08T00:21:40.191780227Z" level=info msg="RemovePodSandbox for \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\"" Nov 8 00:21:40.192886 containerd[1591]: time="2025-11-08T00:21:40.191810018Z" level=info msg="Forcibly stopping sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\"" Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.244 [WARNING][5211] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b85782e-ef51-43c2-92d6-7721ec39bac1", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"e1f29b92260f72cf612b90f2fd4021b75b762a1ed3c50ac5a15a87d91e10e613", Pod:"coredns-668d6bf9bc-cbp7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f4b25b1300", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.245 [INFO][5211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.245 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" iface="eth0" netns="" Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.245 [INFO][5211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.245 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.283 [INFO][5218] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" HandleID="k8s-pod-network.c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.283 [INFO][5218] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.283 [INFO][5218] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.292 [WARNING][5218] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" HandleID="k8s-pod-network.c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.292 [INFO][5218] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" HandleID="k8s-pod-network.c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--cbp7m-eth0" Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.294 [INFO][5218] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:40.302533 containerd[1591]: 2025-11-08 00:21:40.297 [INFO][5211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092" Nov 8 00:21:40.302533 containerd[1591]: time="2025-11-08T00:21:40.300247377Z" level=info msg="TearDown network for sandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\" successfully" Nov 8 00:21:40.304405 containerd[1591]: time="2025-11-08T00:21:40.304352225Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:40.304722 containerd[1591]: time="2025-11-08T00:21:40.304696870Z" level=info msg="RemovePodSandbox \"c1c8a1a777e67e18520b69db97495d600136d359afcf8585be1fa939eb770092\" returns successfully" Nov 8 00:21:40.305740 containerd[1591]: time="2025-11-08T00:21:40.305677401Z" level=info msg="StopPodSandbox for \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\"" Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.361 [WARNING][5232] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0", GenerateName:"calico-kube-controllers-78689fc948-", Namespace:"calico-system", SelfLink:"", UID:"571339fa-a980-4274-be42-77b940705c5d", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78689fc948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a", Pod:"calico-kube-controllers-78689fc948-mm7k2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7715faeb73d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.361 [INFO][5232] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.361 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" iface="eth0" netns="" Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.361 [INFO][5232] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.361 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.394 [INFO][5239] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" HandleID="k8s-pod-network.e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.395 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.395 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.403 [WARNING][5239] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" HandleID="k8s-pod-network.e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.403 [INFO][5239] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" HandleID="k8s-pod-network.e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.405 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:40.410631 containerd[1591]: 2025-11-08 00:21:40.408 [INFO][5232] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:40.411858 containerd[1591]: time="2025-11-08T00:21:40.410689125Z" level=info msg="TearDown network for sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\" successfully" Nov 8 00:21:40.411858 containerd[1591]: time="2025-11-08T00:21:40.410733834Z" level=info msg="StopPodSandbox for \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\" returns successfully" Nov 8 00:21:40.411858 containerd[1591]: time="2025-11-08T00:21:40.411345586Z" level=info msg="RemovePodSandbox for \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\"" Nov 8 00:21:40.411858 containerd[1591]: time="2025-11-08T00:21:40.411377434Z" level=info msg="Forcibly stopping sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\"" Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.458 [WARNING][5253] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0", GenerateName:"calico-kube-controllers-78689fc948-", Namespace:"calico-system", SelfLink:"", UID:"571339fa-a980-4274-be42-77b940705c5d", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78689fc948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"8af455b46cb8045e90f1f6acfaaf25dbd966c5b4f4363e6ed82eed99a80af64a", Pod:"calico-kube-controllers-78689fc948-mm7k2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7715faeb73d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.459 [INFO][5253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.459 [INFO][5253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" iface="eth0" netns="" Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.459 [INFO][5253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.459 [INFO][5253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.494 [INFO][5260] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" HandleID="k8s-pod-network.e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.494 [INFO][5260] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.494 [INFO][5260] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.502 [WARNING][5260] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" HandleID="k8s-pod-network.e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.502 [INFO][5260] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" HandleID="k8s-pod-network.e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-calico--kube--controllers--78689fc948--mm7k2-eth0" Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.505 [INFO][5260] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:40.512509 containerd[1591]: 2025-11-08 00:21:40.508 [INFO][5253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945" Nov 8 00:21:40.512509 containerd[1591]: time="2025-11-08T00:21:40.511434592Z" level=info msg="TearDown network for sandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\" successfully" Nov 8 00:21:40.518689 containerd[1591]: time="2025-11-08T00:21:40.517913236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:40.518689 containerd[1591]: time="2025-11-08T00:21:40.518004630Z" level=info msg="RemovePodSandbox \"e79ccee906a844687c4b3ebf53a1128683e95c8a9ea966bbfbc3f662cc4f9945\" returns successfully" Nov 8 00:21:40.518689 containerd[1591]: time="2025-11-08T00:21:40.518596600Z" level=info msg="StopPodSandbox for \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\"" Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.570 [WARNING][5274] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6e197bac-6071-4052-8e5a-3a64d2035a47", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d", Pod:"goldmane-666569f655-gs456", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1c08bba9206", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.571 [INFO][5274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.571 [INFO][5274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" iface="eth0" netns="" Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.571 [INFO][5274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.571 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.613 [INFO][5281] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" HandleID="k8s-pod-network.4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.613 [INFO][5281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.614 [INFO][5281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.622 [WARNING][5281] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" HandleID="k8s-pod-network.4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.622 [INFO][5281] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" HandleID="k8s-pod-network.4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.624 [INFO][5281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:40.629920 containerd[1591]: 2025-11-08 00:21:40.627 [INFO][5274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:40.630742 containerd[1591]: time="2025-11-08T00:21:40.629987957Z" level=info msg="TearDown network for sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\" successfully" Nov 8 00:21:40.630742 containerd[1591]: time="2025-11-08T00:21:40.630028537Z" level=info msg="StopPodSandbox for \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\" returns successfully" Nov 8 00:21:40.630742 containerd[1591]: time="2025-11-08T00:21:40.630717527Z" level=info msg="RemovePodSandbox for \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\"" Nov 8 00:21:40.630891 containerd[1591]: time="2025-11-08T00:21:40.630747723Z" level=info msg="Forcibly stopping sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\"" Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.694 [WARNING][5295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6e197bac-6071-4052-8e5a-3a64d2035a47", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"6a187713ef84a17a8f625da1c54810137a5c027240ac203e4fc3995cdb14be6d", Pod:"goldmane-666569f655-gs456", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1c08bba9206", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.695 [INFO][5295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.695 [INFO][5295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" iface="eth0" netns="" Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.695 [INFO][5295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.695 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.722 [INFO][5302] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" HandleID="k8s-pod-network.4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.722 [INFO][5302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.722 [INFO][5302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.730 [WARNING][5302] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" HandleID="k8s-pod-network.4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.730 [INFO][5302] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" HandleID="k8s-pod-network.4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-goldmane--666569f655--gs456-eth0" Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.732 [INFO][5302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:40.737843 containerd[1591]: 2025-11-08 00:21:40.735 [INFO][5295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214" Nov 8 00:21:40.738402 containerd[1591]: time="2025-11-08T00:21:40.737912751Z" level=info msg="TearDown network for sandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\" successfully" Nov 8 00:21:40.741365 containerd[1591]: time="2025-11-08T00:21:40.741171638Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:40.741365 containerd[1591]: time="2025-11-08T00:21:40.741246629Z" level=info msg="RemovePodSandbox \"4c43a2f2a7364b17f8ffdaad6f80e8012773f1d18a50c18ae72ac7c932722214\" returns successfully" Nov 8 00:21:40.741946 containerd[1591]: time="2025-11-08T00:21:40.741913944Z" level=info msg="StopPodSandbox for \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\"" Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.792 [WARNING][5316] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aed9a615-02c1-40d6-81ad-65033e8e154c", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb", Pod:"coredns-668d6bf9bc-5blmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa7ce56622c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.793 [INFO][5316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.793 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" iface="eth0" netns="" Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.793 [INFO][5316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.793 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.825 [INFO][5323] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" HandleID="k8s-pod-network.1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.825 [INFO][5323] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.825 [INFO][5323] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.837 [WARNING][5323] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" HandleID="k8s-pod-network.1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.837 [INFO][5323] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" HandleID="k8s-pod-network.1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.839 [INFO][5323] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:40.847113 containerd[1591]: 2025-11-08 00:21:40.843 [INFO][5316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:40.847113 containerd[1591]: time="2025-11-08T00:21:40.847084012Z" level=info msg="TearDown network for sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\" successfully" Nov 8 00:21:40.848917 containerd[1591]: time="2025-11-08T00:21:40.847128196Z" level=info msg="StopPodSandbox for \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\" returns successfully" Nov 8 00:21:40.848917 containerd[1591]: time="2025-11-08T00:21:40.848659426Z" level=info msg="RemovePodSandbox for \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\"" Nov 8 00:21:40.848917 containerd[1591]: time="2025-11-08T00:21:40.848691959Z" level=info msg="Forcibly stopping sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\"" Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.893 [WARNING][5337] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aed9a615-02c1-40d6-81ad-65033e8e154c", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-01b3a4b0a8", ContainerID:"f2fb9fadacda311882813868a95c5332cfe0fe052ba1cb4917395100545bf7bb", Pod:"coredns-668d6bf9bc-5blmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa7ce56622c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.893 [INFO][5337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.893 [INFO][5337] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" iface="eth0" netns="" Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.893 [INFO][5337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.893 [INFO][5337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.925 [INFO][5344] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" HandleID="k8s-pod-network.1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.925 [INFO][5344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.925 [INFO][5344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.932 [WARNING][5344] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" HandleID="k8s-pod-network.1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.933 [INFO][5344] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" HandleID="k8s-pod-network.1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Workload="ci--4081.3.6--n--01b3a4b0a8-k8s-coredns--668d6bf9bc--5blmv-eth0" Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.934 [INFO][5344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:40.940551 containerd[1591]: 2025-11-08 00:21:40.937 [INFO][5337] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d" Nov 8 00:21:40.940551 containerd[1591]: time="2025-11-08T00:21:40.939770185Z" level=info msg="TearDown network for sandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\" successfully" Nov 8 00:21:40.945660 containerd[1591]: time="2025-11-08T00:21:40.945599044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:40.945882 containerd[1591]: time="2025-11-08T00:21:40.945674180Z" level=info msg="RemovePodSandbox \"1fc7e5a07ccae5a0c1f401724ff9c6af80c3cfc018e0d419f58da5ebf1df517d\" returns successfully" Nov 8 00:21:41.310862 containerd[1591]: time="2025-11-08T00:21:41.310811731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:21:41.680928 containerd[1591]: time="2025-11-08T00:21:41.680722817Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:41.682024 containerd[1591]: time="2025-11-08T00:21:41.681879454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:21:41.682024 containerd[1591]: time="2025-11-08T00:21:41.681945280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:41.682205 kubelet[2695]: E1108 00:21:41.682145 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:41.682660 kubelet[2695]: E1108 00:21:41.682206 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:41.682660 kubelet[2695]: E1108 00:21:41.682353 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vz5wp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b65b9d44c-ld5bj_calico-apiserver(8d5058f1-2a34-4b46-bc5b-60d93e86f9f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:41.683947 kubelet[2695]: E1108 00:21:41.683892 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" podUID="8d5058f1-2a34-4b46-bc5b-60d93e86f9f4" Nov 8 00:21:42.309478 containerd[1591]: time="2025-11-08T00:21:42.309364624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:21:42.444917 systemd[1]: Started sshd@8-64.23.144.43:22-139.178.68.195:46560.service - OpenSSH per-connection server daemon (139.178.68.195:46560). 
Nov 8 00:21:42.522380 sshd[5352]: Accepted publickey for core from 139.178.68.195 port 46560 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:42.523555 sshd[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:42.529254 systemd-logind[1566]: New session 9 of user core. Nov 8 00:21:42.535893 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:21:42.707841 sshd[5352]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:42.711536 systemd[1]: sshd@8-64.23.144.43:22-139.178.68.195:46560.service: Deactivated successfully. Nov 8 00:21:42.718059 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:21:42.720641 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:21:42.722109 systemd-logind[1566]: Removed session 9. Nov 8 00:21:42.775995 containerd[1591]: time="2025-11-08T00:21:42.775770993Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:42.776957 containerd[1591]: time="2025-11-08T00:21:42.776829369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:21:42.776957 containerd[1591]: time="2025-11-08T00:21:42.776890866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:21:42.777344 kubelet[2695]: E1108 00:21:42.777283 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:21:42.777736 kubelet[2695]: E1108 00:21:42.777363 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:21:42.777736 kubelet[2695]: E1108 00:21:42.777550 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfpxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2mnck_calico-system(f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:42.781130 containerd[1591]: time="2025-11-08T00:21:42.780833525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:21:43.122453 containerd[1591]: time="2025-11-08T00:21:43.122310994Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:43.123445 containerd[1591]: time="2025-11-08T00:21:43.123327348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:21:43.123445 containerd[1591]: time="2025-11-08T00:21:43.123390867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:21:43.123611 kubelet[2695]: E1108 00:21:43.123574 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:21:43.123657 kubelet[2695]: E1108 00:21:43.123626 2695 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:21:43.123794 kubelet[2695]: E1108 00:21:43.123751 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfpxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2mnck_calico-system(f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:43.125046 kubelet[2695]: E1108 00:21:43.124909 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384" Nov 8 00:21:44.309600 containerd[1591]: time="2025-11-08T00:21:44.309558962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:21:44.669857 containerd[1591]: time="2025-11-08T00:21:44.669644750Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:44.670911 containerd[1591]: time="2025-11-08T00:21:44.670441403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:21:44.671148 containerd[1591]: time="2025-11-08T00:21:44.670513747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:44.671372 kubelet[2695]: E1108 00:21:44.671306 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:44.671873 kubelet[2695]: E1108 00:21:44.671388 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:44.671873 kubelet[2695]: E1108 00:21:44.671617 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95h7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78689fc948-mm7k2_calico-system(571339fa-a980-4274-be42-77b940705c5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:44.673455 kubelet[2695]: E1108 00:21:44.673412 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" podUID="571339fa-a980-4274-be42-77b940705c5d" Nov 8 00:21:46.145910 systemd[1]: Started sshd@9-64.23.144.43:22-140.233.190.96:52850.service - OpenSSH per-connection server daemon (140.233.190.96:52850). Nov 8 00:21:47.717789 systemd[1]: Started sshd@10-64.23.144.43:22-139.178.68.195:35122.service - OpenSSH per-connection server daemon (139.178.68.195:35122). Nov 8 00:21:47.761782 sshd[5378]: Accepted publickey for core from 139.178.68.195 port 35122 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:47.763491 sshd[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:47.768588 systemd-logind[1566]: New session 10 of user core. Nov 8 00:21:47.773864 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:21:47.911286 sshd[5378]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:47.915627 systemd[1]: sshd@10-64.23.144.43:22-139.178.68.195:35122.service: Deactivated successfully. Nov 8 00:21:47.922238 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:21:47.923584 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:21:47.924717 systemd-logind[1566]: Removed session 10. 
Nov 8 00:21:48.311534 kubelet[2695]: E1108 00:21:48.311446 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74db99b9f5-n8j6t" podUID="a194daac-f83a-4a21-ba16-72b7bfe8925b" Nov 8 00:21:49.310138 kubelet[2695]: E1108 00:21:49.310080 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gs456" podUID="6e197bac-6071-4052-8e5a-3a64d2035a47" Nov 8 00:21:52.312001 kubelet[2695]: E1108 00:21:52.310765 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" podUID="8d5058f1-2a34-4b46-bc5b-60d93e86f9f4" Nov 8 00:21:52.313955 kubelet[2695]: E1108 00:21:52.313530 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" podUID="e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5" Nov 8 00:21:52.832533 kubelet[2695]: E1108 00:21:52.832108 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:52.923917 systemd[1]: Started sshd@11-64.23.144.43:22-139.178.68.195:35132.service - OpenSSH per-connection server daemon (139.178.68.195:35132). 
Nov 8 00:21:52.971621 sshd[5413]: Accepted publickey for core from 139.178.68.195 port 35132 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:52.973998 sshd[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:52.981095 systemd-logind[1566]: New session 11 of user core. Nov 8 00:21:52.989150 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:21:53.166105 sshd[5413]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:53.175880 systemd[1]: Started sshd@12-64.23.144.43:22-139.178.68.195:56208.service - OpenSSH per-connection server daemon (139.178.68.195:56208). Nov 8 00:21:53.177132 systemd[1]: sshd@11-64.23.144.43:22-139.178.68.195:35132.service: Deactivated successfully. Nov 8 00:21:53.185806 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:21:53.188682 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:21:53.190607 systemd-logind[1566]: Removed session 11. Nov 8 00:21:53.246542 sshd[5424]: Accepted publickey for core from 139.178.68.195 port 56208 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:53.248759 sshd[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:53.253901 systemd-logind[1566]: New session 12 of user core. Nov 8 00:21:53.256942 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:21:53.464152 sshd[5424]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:53.483037 systemd[1]: Started sshd@13-64.23.144.43:22-139.178.68.195:56218.service - OpenSSH per-connection server daemon (139.178.68.195:56218). Nov 8 00:21:53.483560 systemd[1]: sshd@12-64.23.144.43:22-139.178.68.195:56208.service: Deactivated successfully. Nov 8 00:21:53.500439 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:21:53.505848 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:21:53.510324 systemd-logind[1566]: Removed session 12. Nov 8 00:21:53.553242 sshd[5436]: Accepted publickey for core from 139.178.68.195 port 56218 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:53.555098 sshd[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:53.561029 systemd-logind[1566]: New session 13 of user core. Nov 8 00:21:53.564786 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:21:53.716438 sshd[5436]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:53.723005 systemd[1]: sshd@13-64.23.144.43:22-139.178.68.195:56218.service: Deactivated successfully. Nov 8 00:21:53.725056 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:21:53.727111 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:21:53.729416 systemd-logind[1566]: Removed session 13. 
Nov 8 00:21:55.313838 kubelet[2695]: E1108 00:21:55.313478 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:55.314660 kubelet[2695]: E1108 00:21:55.314302 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" podUID="571339fa-a980-4274-be42-77b940705c5d" Nov 8 00:21:56.313593 kubelet[2695]: E1108 00:21:56.313509 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384" Nov 8 00:21:58.727854 systemd[1]: Started sshd@14-64.23.144.43:22-139.178.68.195:56222.service - OpenSSH per-connection server daemon (139.178.68.195:56222). Nov 8 00:21:58.778386 sshd[5457]: Accepted publickey for core from 139.178.68.195 port 56222 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:58.780968 sshd[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:58.789048 systemd-logind[1566]: New session 14 of user core. Nov 8 00:21:58.793981 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:21:58.947220 sshd[5457]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:58.952351 systemd[1]: sshd@14-64.23.144.43:22-139.178.68.195:56222.service: Deactivated successfully. Nov 8 00:21:58.955953 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:21:58.955978 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:21:58.958519 systemd-logind[1566]: Removed session 14. 
Nov 8 00:22:00.310288 containerd[1591]: time="2025-11-08T00:22:00.309923525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:22:00.677621 containerd[1591]: time="2025-11-08T00:22:00.677509957Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:00.678657 containerd[1591]: time="2025-11-08T00:22:00.678592573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:22:00.678792 containerd[1591]: time="2025-11-08T00:22:00.678744309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:00.679013 kubelet[2695]: E1108 00:22:00.678957 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:00.679822 kubelet[2695]: E1108 00:22:00.679036 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:00.679822 kubelet[2695]: E1108 00:22:00.679183 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbgw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gs456_calico-system(6e197bac-6071-4052-8e5a-3a64d2035a47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:00.681259 kubelet[2695]: E1108 00:22:00.681119 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gs456" podUID="6e197bac-6071-4052-8e5a-3a64d2035a47" Nov 8 00:22:00.720870 sshd[5375]: Connection closed by authenticating user root 140.233.190.96 port 52850 [preauth] Nov 8 00:22:00.723884 systemd[1]: sshd@9-64.23.144.43:22-140.233.190.96:52850.service: Deactivated successfully. 
Nov 8 00:22:02.310499 containerd[1591]: time="2025-11-08T00:22:02.310295327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:22:02.327117 kubelet[2695]: E1108 00:22:02.325029 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:02.672717 containerd[1591]: time="2025-11-08T00:22:02.672638971Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:02.675773 containerd[1591]: time="2025-11-08T00:22:02.675265506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:22:02.675773 containerd[1591]: time="2025-11-08T00:22:02.675369814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:22:02.676649 kubelet[2695]: E1108 00:22:02.676035 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:02.676649 kubelet[2695]: E1108 00:22:02.676098 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:02.676649 kubelet[2695]: E1108 00:22:02.676218 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c0f4f730874f4da2b1ae525f279b9089,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ddv98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-74db99b9f5-n8j6t_calico-system(a194daac-f83a-4a21-ba16-72b7bfe8925b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:02.680018 containerd[1591]: time="2025-11-08T00:22:02.679893529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:22:03.008439 containerd[1591]: time="2025-11-08T00:22:03.008103588Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:03.009742 containerd[1591]: time="2025-11-08T00:22:03.009587679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:22:03.009742 containerd[1591]: time="2025-11-08T00:22:03.009663584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:03.010109 kubelet[2695]: E1108 00:22:03.009875 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:03.010109 kubelet[2695]: E1108 00:22:03.009946 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:03.010808 kubelet[2695]: E1108 00:22:03.010670 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddv98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74db99b9f5-n8j6t_calico-system(a194daac-f83a-4a21-ba16-72b7bfe8925b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:03.012777 kubelet[2695]: E1108 00:22:03.012283 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74db99b9f5-n8j6t" podUID="a194daac-f83a-4a21-ba16-72b7bfe8925b" Nov 8 00:22:03.312607 containerd[1591]: time="2025-11-08T00:22:03.311146519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:03.654866 containerd[1591]: time="2025-11-08T00:22:03.654789508Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:03.655785 containerd[1591]: time="2025-11-08T00:22:03.655683352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:03.655985 containerd[1591]: time="2025-11-08T00:22:03.655718610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:03.656218 kubelet[2695]: E1108 00:22:03.656155 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:03.656851 kubelet[2695]: E1108 00:22:03.656234 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:03.656851 kubelet[2695]: E1108 00:22:03.656453 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vz5wp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b65b9d44c-ld5bj_calico-apiserver(8d5058f1-2a34-4b46-bc5b-60d93e86f9f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:03.657869 kubelet[2695]: E1108 00:22:03.657794 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" podUID="8d5058f1-2a34-4b46-bc5b-60d93e86f9f4" Nov 8 00:22:03.959509 systemd[1]: Started sshd@15-64.23.144.43:22-139.178.68.195:54268.service - OpenSSH per-connection server daemon (139.178.68.195:54268). Nov 8 00:22:04.071205 sshd[5480]: Accepted publickey for core from 139.178.68.195 port 54268 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:04.072837 sshd[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:04.083839 systemd-logind[1566]: New session 15 of user core. Nov 8 00:22:04.089402 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:22:04.284700 sshd[5480]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:04.291541 systemd[1]: sshd@15-64.23.144.43:22-139.178.68.195:54268.service: Deactivated successfully. Nov 8 00:22:04.301202 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:22:04.302188 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:22:04.303324 systemd-logind[1566]: Removed session 15. Nov 8 00:22:05.310020 kubelet[2695]: E1108 00:22:05.308501 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:05.310020 kubelet[2695]: E1108 00:22:05.309284 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:07.312117 containerd[1591]: time="2025-11-08T00:22:07.311902989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:07.657664 containerd[1591]: time="2025-11-08T00:22:07.657589000Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:07.658362 containerd[1591]: time="2025-11-08T00:22:07.658314270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:07.658547 containerd[1591]: time="2025-11-08T00:22:07.658337226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:07.658644 kubelet[2695]: E1108 00:22:07.658600 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:07.659140 kubelet[2695]: E1108 00:22:07.658664 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:07.659140 kubelet[2695]: E1108 00:22:07.658826 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxthx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b65b9d44c-vc5w2_calico-apiserver(e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:07.660483 kubelet[2695]: E1108 00:22:07.660428 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" podUID="e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5" Nov 8 00:22:08.314796 
containerd[1591]: time="2025-11-08T00:22:08.314749889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:22:08.651248 containerd[1591]: time="2025-11-08T00:22:08.651103973Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:08.652071 containerd[1591]: time="2025-11-08T00:22:08.652021751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:22:08.652236 containerd[1591]: time="2025-11-08T00:22:08.652124440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:08.652794 kubelet[2695]: E1108 00:22:08.652378 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:08.652794 kubelet[2695]: E1108 00:22:08.652437 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:08.652794 kubelet[2695]: E1108 00:22:08.652687 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95h7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78689fc948-mm7k2_calico-system(571339fa-a980-4274-be42-77b940705c5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:08.654617 kubelet[2695]: E1108 00:22:08.654568 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" podUID="571339fa-a980-4274-be42-77b940705c5d" Nov 8 00:22:09.291878 systemd[1]: Started sshd@16-64.23.144.43:22-139.178.68.195:54272.service - OpenSSH per-connection server daemon (139.178.68.195:54272). Nov 8 00:22:09.314321 containerd[1591]: time="2025-11-08T00:22:09.313981396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:22:09.339138 sshd[5496]: Accepted publickey for core from 139.178.68.195 port 54272 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:09.340329 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:09.345236 systemd-logind[1566]: New session 16 of user core. Nov 8 00:22:09.352865 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:22:09.511137 sshd[5496]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:09.518701 systemd[1]: sshd@16-64.23.144.43:22-139.178.68.195:54272.service: Deactivated successfully. Nov 8 00:22:09.523396 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:22:09.525070 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:22:09.526312 systemd-logind[1566]: Removed session 16. 
Nov 8 00:22:09.655048 containerd[1591]: time="2025-11-08T00:22:09.654973167Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:09.655979 containerd[1591]: time="2025-11-08T00:22:09.655902859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:22:09.656192 containerd[1591]: time="2025-11-08T00:22:09.655960305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:22:09.656371 kubelet[2695]: E1108 00:22:09.656294 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:22:09.657128 kubelet[2695]: E1108 00:22:09.656382 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:22:09.657128 kubelet[2695]: E1108 00:22:09.656523 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfpxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2mnck_calico-system(f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:09.659495 containerd[1591]: time="2025-11-08T00:22:09.659307442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:22:09.986637 containerd[1591]: time="2025-11-08T00:22:09.986276423Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:09.987451 containerd[1591]: time="2025-11-08T00:22:09.987355501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:22:09.987702 containerd[1591]: time="2025-11-08T00:22:09.987424614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:22:09.987950 kubelet[2695]: E1108 00:22:09.987888 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:09.988115 kubelet[2695]: E1108 00:22:09.987972 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:09.989661 kubelet[2695]: E1108 00:22:09.988240 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfpxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2mnck_calico-system(f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:09.989661 kubelet[2695]: E1108 00:22:09.989531 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384" Nov 8 00:22:13.309863 kubelet[2695]: E1108 00:22:13.308851 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-gs456" podUID="6e197bac-6071-4052-8e5a-3a64d2035a47" Nov 8 00:22:14.521025 systemd[1]: Started sshd@17-64.23.144.43:22-139.178.68.195:44316.service - OpenSSH per-connection server daemon (139.178.68.195:44316). Nov 8 00:22:14.565003 sshd[5510]: Accepted publickey for core from 139.178.68.195 port 44316 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:14.566904 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:14.572347 systemd-logind[1566]: New session 17 of user core. Nov 8 00:22:14.580840 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:22:14.749010 sshd[5510]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:14.764975 systemd[1]: Started sshd@18-64.23.144.43:22-139.178.68.195:44332.service - OpenSSH per-connection server daemon (139.178.68.195:44332). Nov 8 00:22:14.766228 systemd[1]: sshd@17-64.23.144.43:22-139.178.68.195:44316.service: Deactivated successfully. Nov 8 00:22:14.772536 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:22:14.785284 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:22:14.789204 systemd-logind[1566]: Removed session 17. Nov 8 00:22:14.845492 sshd[5522]: Accepted publickey for core from 139.178.68.195 port 44332 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:14.847938 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:14.855668 systemd-logind[1566]: New session 18 of user core. Nov 8 00:22:14.866132 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:22:15.259738 sshd[5522]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:15.270812 systemd[1]: Started sshd@19-64.23.144.43:22-139.178.68.195:44336.service - OpenSSH per-connection server daemon (139.178.68.195:44336). Nov 8 00:22:15.275563 systemd[1]: sshd@18-64.23.144.43:22-139.178.68.195:44332.service: Deactivated successfully. Nov 8 00:22:15.278842 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:22:15.279947 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:22:15.282662 systemd-logind[1566]: Removed session 18. 
Nov 8 00:22:15.316222 kubelet[2695]: E1108 00:22:15.314802 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74db99b9f5-n8j6t" podUID="a194daac-f83a-4a21-ba16-72b7bfe8925b" Nov 8 00:22:15.354787 sshd[5533]: Accepted publickey for core from 139.178.68.195 port 44336 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:15.356656 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:15.362661 systemd-logind[1566]: New session 19 of user core. Nov 8 00:22:15.368437 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:22:16.010004 sshd[5533]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:16.027328 systemd[1]: Started sshd@20-64.23.144.43:22-139.178.68.195:44350.service - OpenSSH per-connection server daemon (139.178.68.195:44350). Nov 8 00:22:16.028165 systemd[1]: sshd@19-64.23.144.43:22-139.178.68.195:44336.service: Deactivated successfully. Nov 8 00:22:16.038151 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:22:16.040247 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:22:16.044262 systemd-logind[1566]: Removed session 19. Nov 8 00:22:16.107327 sshd[5552]: Accepted publickey for core from 139.178.68.195 port 44350 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:16.110118 sshd[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:16.117346 systemd-logind[1566]: New session 20 of user core. Nov 8 00:22:16.122836 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:22:16.609097 sshd[5552]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:16.621967 systemd[1]: Started sshd@21-64.23.144.43:22-139.178.68.195:44360.service - OpenSSH per-connection server daemon (139.178.68.195:44360). Nov 8 00:22:16.626963 systemd[1]: sshd@20-64.23.144.43:22-139.178.68.195:44350.service: Deactivated successfully. Nov 8 00:22:16.637077 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:22:16.638522 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:22:16.640381 systemd-logind[1566]: Removed session 20. Nov 8 00:22:16.679945 sshd[5566]: Accepted publickey for core from 139.178.68.195 port 44360 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:16.682647 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:16.691095 systemd-logind[1566]: New session 21 of user core. Nov 8 00:22:16.695038 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 8 00:22:16.890752 sshd[5566]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:16.896291 systemd[1]: sshd@21-64.23.144.43:22-139.178.68.195:44360.service: Deactivated successfully. Nov 8 00:22:16.896695 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:22:16.903583 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:22:16.905562 systemd-logind[1566]: Removed session 21. Nov 8 00:22:17.310142 kubelet[2695]: E1108 00:22:17.309541 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" podUID="8d5058f1-2a34-4b46-bc5b-60d93e86f9f4" Nov 8 00:22:21.310122 kubelet[2695]: E1108 00:22:21.309079 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" podUID="e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5" Nov 8 00:22:21.904106 systemd[1]: Started sshd@22-64.23.144.43:22-139.178.68.195:44364.service - OpenSSH per-connection server daemon (139.178.68.195:44364). Nov 8 00:22:21.956018 sshd[5583]: Accepted publickey for core from 139.178.68.195 port 44364 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:21.958375 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:21.965393 systemd-logind[1566]: New session 22 of user core. Nov 8 00:22:21.971974 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:22:22.128698 sshd[5583]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:22.132275 systemd[1]: sshd@22-64.23.144.43:22-139.178.68.195:44364.service: Deactivated successfully. Nov 8 00:22:22.139981 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:22:22.141801 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:22:22.143114 systemd-logind[1566]: Removed session 22. 
Nov 8 00:22:22.309152 kubelet[2695]: E1108 00:22:22.308939 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" podUID="571339fa-a980-4274-be42-77b940705c5d" Nov 8 00:22:22.311041 kubelet[2695]: E1108 00:22:22.310961 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384" Nov 8 00:22:27.138044 systemd[1]: Started sshd@23-64.23.144.43:22-139.178.68.195:55334.service - OpenSSH per-connection server daemon (139.178.68.195:55334). Nov 8 00:22:27.236876 sshd[5623]: Accepted publickey for core from 139.178.68.195 port 55334 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:27.241822 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:27.259246 systemd-logind[1566]: New session 23 of user core. Nov 8 00:22:27.260902 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:22:27.312048 kubelet[2695]: E1108 00:22:27.310896 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gs456" podUID="6e197bac-6071-4052-8e5a-3a64d2035a47" Nov 8 00:22:27.550731 sshd[5623]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:27.561935 systemd[1]: sshd@23-64.23.144.43:22-139.178.68.195:55334.service: Deactivated successfully. Nov 8 00:22:27.571175 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:22:27.572813 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:22:27.575020 systemd-logind[1566]: Removed session 23. 
Nov 8 00:22:29.311317 kubelet[2695]: E1108 00:22:29.310599 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-ld5bj" podUID="8d5058f1-2a34-4b46-bc5b-60d93e86f9f4" Nov 8 00:22:29.313156 kubelet[2695]: E1108 00:22:29.312581 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74db99b9f5-n8j6t" podUID="a194daac-f83a-4a21-ba16-72b7bfe8925b" Nov 8 00:22:32.563795 systemd[1]: Started sshd@24-64.23.144.43:22-139.178.68.195:55346.service - OpenSSH per-connection server daemon (139.178.68.195:55346). Nov 8 00:22:32.613720 sshd[5638]: Accepted publickey for core from 139.178.68.195 port 55346 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:32.615695 sshd[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:32.621413 systemd-logind[1566]: New session 24 of user core. Nov 8 00:22:32.625844 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:22:32.831750 sshd[5638]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:32.837940 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:22:32.838213 systemd[1]: sshd@24-64.23.144.43:22-139.178.68.195:55346.service: Deactivated successfully. Nov 8 00:22:32.844108 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:22:32.845953 systemd-logind[1566]: Removed session 24. 
Nov 8 00:22:34.310130 kubelet[2695]: E1108 00:22:34.309846 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b65b9d44c-vc5w2" podUID="e7d3f242-9c3a-4bcc-93ef-b5ab42ced5a5" Nov 8 00:22:35.313776 kubelet[2695]: E1108 00:22:35.313729 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2mnck" podUID="f2f9c81e-0f3a-4b51-8c60-b0e2a35eb384" Nov 8 00:22:36.308811 kubelet[2695]: E1108 00:22:36.308272 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:36.310207 kubelet[2695]: E1108 00:22:36.310085 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78689fc948-mm7k2" podUID="571339fa-a980-4274-be42-77b940705c5d" Nov 8 00:22:37.847643 systemd[1]: Started sshd@25-64.23.144.43:22-139.178.68.195:53464.service - OpenSSH per-connection server daemon (139.178.68.195:53464). Nov 8 00:22:37.943650 sshd[5652]: Accepted publickey for core from 139.178.68.195 port 53464 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:37.943412 sshd[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:37.954967 systemd-logind[1566]: New session 25 of user core. Nov 8 00:22:37.962841 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:22:38.393921 sshd[5652]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:38.398093 systemd[1]: sshd@25-64.23.144.43:22-139.178.68.195:53464.service: Deactivated successfully. Nov 8 00:22:38.402279 systemd[1]: session-25.scope: Deactivated successfully. 
Nov 8 00:22:38.403326 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:22:38.404892 systemd-logind[1566]: Removed session 25. Nov 8 00:22:39.310626 kubelet[2695]: E1108 00:22:39.310155 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:40.311779 kubelet[2695]: E1108 00:22:40.311710 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gs456" podUID="6e197bac-6071-4052-8e5a-3a64d2035a47" Nov 8 00:22:40.312418 kubelet[2695]: E1108 00:22:40.311875 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74db99b9f5-n8j6t" podUID="a194daac-f83a-4a21-ba16-72b7bfe8925b"