Nov 8 00:21:01.990752 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:21:01.990793 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:21:01.990812 kernel: BIOS-provided physical RAM map:
Nov 8 00:21:01.990820 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 8 00:21:01.990826 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 8 00:21:01.990833 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:21:01.990842 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 8 00:21:01.990861 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 8 00:21:01.990868 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:21:01.990878 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:21:01.990885 kernel: NX (Execute Disable) protection: active
Nov 8 00:21:01.990892 kernel: APIC: Static calls initialized
Nov 8 00:21:01.990906 kernel: SMBIOS 2.8 present.
Nov 8 00:21:01.990913 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 8 00:21:01.990922 kernel: Hypervisor detected: KVM
Nov 8 00:21:01.990934 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:21:01.990946 kernel: kvm-clock: using sched offset of 3222962698 cycles
Nov 8 00:21:01.990955 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:21:01.990963 kernel: tsc: Detected 2494.140 MHz processor
Nov 8 00:21:01.990971 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:21:01.991001 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:21:01.991010 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 8 00:21:01.991018 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:21:01.991026 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:21:01.991038 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:21:01.991046 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 8 00:21:01.991055 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:21:01.991063 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:21:01.991071 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:21:01.991079 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 8 00:21:01.991087 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:21:01.991095 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:21:01.991103 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:21:01.991114 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:21:01.991122 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 8 00:21:01.991130 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 8 00:21:01.991137 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 8 00:21:01.991145 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 8 00:21:01.991153 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 8 00:21:01.991161 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 8 00:21:01.991174 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 8 00:21:01.991186 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:21:01.991194 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 00:21:01.991203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 8 00:21:01.991211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 8 00:21:01.991223 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Nov 8 00:21:01.991232 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Nov 8 00:21:01.991244 kernel: Zone ranges:
Nov 8 00:21:01.991252 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:21:01.991260 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 8 00:21:01.991269 kernel: Normal empty
Nov 8 00:21:01.991277 kernel: Movable zone start for each node
Nov 8 00:21:01.991286 kernel: Early memory node ranges
Nov 8 00:21:01.991294 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:21:01.991303 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 8 00:21:01.991311 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 8 00:21:01.991339 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:21:01.991347 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:21:01.991358 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 8 00:21:01.991366 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:21:01.991375 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:21:01.991384 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:21:01.991392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:21:01.991400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:21:01.991409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:21:01.991421 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:21:01.991430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:21:01.991438 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:21:01.991447 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:21:01.991455 kernel: TSC deadline timer available
Nov 8 00:21:01.991464 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:21:01.991472 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:21:01.991481 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 8 00:21:01.991492 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:21:01.991501 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:21:01.991512 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:21:01.991521 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:21:01.991530 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:21:01.991538 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:21:01.991546 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 8 00:21:01.991556 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:21:01.991565 kernel: random: crng init done
Nov 8 00:21:01.991573 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:21:01.991585 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:21:01.991594 kernel: Fallback order for Node 0: 0
Nov 8 00:21:01.991602 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Nov 8 00:21:01.991610 kernel: Policy zone: DMA32
Nov 8 00:21:01.991619 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:21:01.991628 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 125148K reserved, 0K cma-reserved)
Nov 8 00:21:01.991636 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:21:01.991645 kernel: Kernel/User page tables isolation: enabled
Nov 8 00:21:01.991653 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:21:01.991665 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:21:01.991674 kernel: Dynamic Preempt: voluntary
Nov 8 00:21:01.991682 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:21:01.991692 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:21:01.991700 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:21:01.991709 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:21:01.991718 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:21:01.991726 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:21:01.991735 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:21:01.991747 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:21:01.991756 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:21:01.991764 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:21:01.991773 kernel: Console: colour VGA+ 80x25
Nov 8 00:21:01.991784 kernel: printk: console [tty0] enabled
Nov 8 00:21:01.991793 kernel: printk: console [ttyS0] enabled
Nov 8 00:21:01.991801 kernel: ACPI: Core revision 20230628
Nov 8 00:21:01.991810 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:21:01.991819 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:21:01.991832 kernel: x2apic enabled
Nov 8 00:21:01.991840 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:21:01.991849 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:21:01.991857 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Nov 8 00:21:01.991866 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Nov 8 00:21:01.991874 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 8 00:21:01.991883 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 8 00:21:01.991892 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:21:01.991914 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:21:01.991924 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:21:01.991932 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 8 00:21:01.991947 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:21:01.991962 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:21:01.992115 kernel: MDS: Mitigation: Clear CPU buffers
Nov 8 00:21:01.992130 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:21:01.992145 kernel: active return thunk: its_return_thunk
Nov 8 00:21:01.992172 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:21:01.992191 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:21:01.992204 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:21:01.992252 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:21:01.992264 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:21:01.992277 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 8 00:21:01.992292 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:21:01.992305 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:21:01.992320 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:21:01.992339 kernel: landlock: Up and running.
Nov 8 00:21:01.992353 kernel: SELinux: Initializing.
Nov 8 00:21:01.992368 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:21:01.992382 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:21:01.992395 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 8 00:21:01.992408 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:21:01.992421 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:21:01.992434 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:21:01.992446 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 8 00:21:01.992464 kernel: signal: max sigframe size: 1776
Nov 8 00:21:01.992477 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:21:01.992490 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:21:01.992502 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:21:01.992515 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:21:01.992543 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:21:01.992556 kernel: .... node #0, CPUs: #1
Nov 8 00:21:01.992567 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:21:01.992580 kernel: smpboot: Max logical packages: 1
Nov 8 00:21:01.992598 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Nov 8 00:21:01.992615 kernel: devtmpfs: initialized
Nov 8 00:21:01.992642 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:21:01.992657 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:21:01.992670 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:21:01.992684 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:21:01.992696 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:21:01.992709 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:21:01.992723 kernel: audit: type=2000 audit(1762561260.897:1): state=initialized audit_enabled=0 res=1
Nov 8 00:21:01.992741 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:21:01.992750 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:21:01.992760 kernel: cpuidle: using governor menu
Nov 8 00:21:01.992769 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:21:01.992778 kernel: dca service started, version 1.12.1
Nov 8 00:21:01.992787 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:21:01.992796 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:21:01.992806 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:21:01.992815 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:21:01.992827 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:21:01.992836 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:21:01.992846 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:21:01.992855 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:21:01.992864 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:21:01.992873 kernel: ACPI: Interpreter enabled
Nov 8 00:21:01.992882 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:21:01.992891 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:21:01.992901 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:21:01.992913 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:21:01.992923 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 8 00:21:01.992932 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:21:01.993219 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:21:01.993336 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 8 00:21:01.993506 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 8 00:21:01.993521 kernel: acpiphp: Slot [3] registered
Nov 8 00:21:01.993539 kernel: acpiphp: Slot [4] registered
Nov 8 00:21:01.993549 kernel: acpiphp: Slot [5] registered
Nov 8 00:21:01.993558 kernel: acpiphp: Slot [6] registered
Nov 8 00:21:01.993567 kernel: acpiphp: Slot [7] registered
Nov 8 00:21:01.993576 kernel: acpiphp: Slot [8] registered
Nov 8 00:21:01.993585 kernel: acpiphp: Slot [9] registered
Nov 8 00:21:01.993594 kernel: acpiphp: Slot [10] registered
Nov 8 00:21:01.993604 kernel: acpiphp: Slot [11] registered
Nov 8 00:21:01.993613 kernel: acpiphp: Slot [12] registered
Nov 8 00:21:01.993622 kernel: acpiphp: Slot [13] registered
Nov 8 00:21:01.993634 kernel: acpiphp: Slot [14] registered
Nov 8 00:21:01.993643 kernel: acpiphp: Slot [15] registered
Nov 8 00:21:01.993652 kernel: acpiphp: Slot [16] registered
Nov 8 00:21:01.993661 kernel: acpiphp: Slot [17] registered
Nov 8 00:21:01.993670 kernel: acpiphp: Slot [18] registered
Nov 8 00:21:01.993679 kernel: acpiphp: Slot [19] registered
Nov 8 00:21:01.993688 kernel: acpiphp: Slot [20] registered
Nov 8 00:21:01.993697 kernel: acpiphp: Slot [21] registered
Nov 8 00:21:01.993706 kernel: acpiphp: Slot [22] registered
Nov 8 00:21:01.993718 kernel: acpiphp: Slot [23] registered
Nov 8 00:21:01.993727 kernel: acpiphp: Slot [24] registered
Nov 8 00:21:01.993737 kernel: acpiphp: Slot [25] registered
Nov 8 00:21:01.993746 kernel: acpiphp: Slot [26] registered
Nov 8 00:21:01.993754 kernel: acpiphp: Slot [27] registered
Nov 8 00:21:01.993769 kernel: acpiphp: Slot [28] registered
Nov 8 00:21:01.993782 kernel: acpiphp: Slot [29] registered
Nov 8 00:21:01.993794 kernel: acpiphp: Slot [30] registered
Nov 8 00:21:01.993807 kernel: acpiphp: Slot [31] registered
Nov 8 00:21:01.993820 kernel: PCI host bridge to bus 0000:00
Nov 8 00:21:01.993969 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:21:01.994088 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:21:01.994183 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:21:01.994274 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 8 00:21:01.994367 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 8 00:21:01.994460 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:21:01.994611 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 8 00:21:01.994730 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 8 00:21:01.994846 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 8 00:21:01.994953 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 8 00:21:01.995071 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 8 00:21:01.995224 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 8 00:21:01.995376 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 8 00:21:01.995499 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 8 00:21:01.995620 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 8 00:21:01.995722 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 8 00:21:01.995840 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 8 00:21:01.995960 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 8 00:21:01.996100 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 8 00:21:01.996256 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 8 00:21:01.996387 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 8 00:21:01.996542 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 8 00:21:01.996706 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 8 00:21:01.996811 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 8 00:21:01.996911 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:21:01.998022 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:21:01.998146 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 8 00:21:01.998245 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 8 00:21:01.998344 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 8 00:21:01.998454 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:21:01.998554 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 8 00:21:01.998653 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 8 00:21:01.998751 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 8 00:21:02.000158 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 8 00:21:02.000292 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 8 00:21:02.000396 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 8 00:21:02.000496 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 8 00:21:02.000611 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:21:02.000752 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 8 00:21:02.000887 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 8 00:21:02.001017 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 8 00:21:02.002300 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:21:02.002434 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 8 00:21:02.002573 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 8 00:21:02.002713 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 8 00:21:02.002884 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 8 00:21:02.004596 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 8 00:21:02.004874 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 8 00:21:02.004903 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:21:02.004916 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:21:02.004926 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:21:02.004935 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:21:02.004944 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 8 00:21:02.004954 kernel: iommu: Default domain type: Translated
Nov 8 00:21:02.004970 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:21:02.005046 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:21:02.005055 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:21:02.005064 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 8 00:21:02.005073 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 8 00:21:02.005201 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 8 00:21:02.005388 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 8 00:21:02.005494 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:21:02.005512 kernel: vgaarb: loaded
Nov 8 00:21:02.005522 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:21:02.005532 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:21:02.005541 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:21:02.005550 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:21:02.005559 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:21:02.005568 kernel: pnp: PnP ACPI init
Nov 8 00:21:02.005578 kernel: pnp: PnP ACPI: found 4 devices
Nov 8 00:21:02.005587 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:21:02.005596 kernel: NET: Registered PF_INET protocol family
Nov 8 00:21:02.005608 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:21:02.005617 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 8 00:21:02.005627 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:21:02.005636 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:21:02.005645 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 8 00:21:02.005654 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 8 00:21:02.005663 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:21:02.005671 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:21:02.005684 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:21:02.005693 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:21:02.005802 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:21:02.005909 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:21:02.006006 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:21:02.006094 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 8 00:21:02.006183 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 8 00:21:02.006291 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 8 00:21:02.006449 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 8 00:21:02.006465 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 8 00:21:02.006570 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 40487 usecs
Nov 8 00:21:02.006583 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:21:02.006592 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 00:21:02.006602 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Nov 8 00:21:02.006611 kernel: Initialise system trusted keyrings
Nov 8 00:21:02.006621 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 8 00:21:02.006630 kernel: Key type asymmetric registered
Nov 8 00:21:02.006644 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:21:02.006654 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:21:02.006663 kernel: io scheduler mq-deadline registered
Nov 8 00:21:02.006673 kernel: io scheduler kyber registered
Nov 8 00:21:02.006682 kernel: io scheduler bfq registered
Nov 8 00:21:02.006691 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:21:02.006700 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 8 00:21:02.006708 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 8 00:21:02.006718 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 8 00:21:02.006730 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:21:02.006739 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:21:02.006749 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:21:02.006758 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:21:02.006767 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:21:02.006776 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:21:02.006921 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 8 00:21:02.010196 kernel: rtc_cmos 00:03: registered as rtc0
Nov 8 00:21:02.010333 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:21:01 UTC (1762561261)
Nov 8 00:21:02.010427 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 8 00:21:02.010440 kernel: intel_pstate: CPU model not supported
Nov 8 00:21:02.010449 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:21:02.010459 kernel: Segment Routing with IPv6
Nov 8 00:21:02.010468 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:21:02.010477 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:21:02.010486 kernel: Key type dns_resolver registered
Nov 8 00:21:02.010496 kernel: IPI shorthand broadcast: enabled
Nov 8 00:21:02.010509 kernel: sched_clock: Marking stable (1095002970, 140211876)->(1260701111, -25486265)
Nov 8 00:21:02.010518 kernel: registered taskstats version 1
Nov 8 00:21:02.010528 kernel: Loading compiled-in X.509 certificates
Nov 8 00:21:02.010537 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:21:02.010545 kernel: Key type .fscrypt registered
Nov 8 00:21:02.010554 kernel: Key type fscrypt-provisioning registered
Nov 8 00:21:02.010563 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:21:02.010572 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:21:02.010581 kernel: ima: No architecture policies found
Nov 8 00:21:02.010594 kernel: clk: Disabling unused clocks
Nov 8 00:21:02.010603 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:21:02.010612 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:21:02.010621 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:21:02.010650 kernel: Run /init as init process
Nov 8 00:21:02.010664 kernel: with arguments:
Nov 8 00:21:02.010673 kernel: /init
Nov 8 00:21:02.010683 kernel: with environment:
Nov 8 00:21:02.010692 kernel: HOME=/
Nov 8 00:21:02.010705 kernel: TERM=linux
Nov 8 00:21:02.010717 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:21:02.010729 systemd[1]: Detected virtualization kvm.
Nov 8 00:21:02.010739 systemd[1]: Detected architecture x86-64.
Nov 8 00:21:02.010749 systemd[1]: Running in initrd.
Nov 8 00:21:02.010758 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:21:02.010768 systemd[1]: Hostname set to .
Nov 8 00:21:02.010781 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:21:02.010791 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:21:02.010801 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:21:02.010811 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:21:02.010822 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:21:02.010832 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:21:02.010842 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:21:02.010852 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:21:02.010866 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:21:02.010876 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:21:02.010886 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:21:02.010896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:21:02.010906 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:21:02.010917 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:21:02.010927 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:21:02.010940 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:21:02.010949 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:21:02.010959 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:21:02.010969 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:21:02.010991 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:21:02.011005 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:21:02.011015 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:21:02.011025 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:21:02.011035 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:21:02.011044 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:21:02.011054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:21:02.011064 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:21:02.011074 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:21:02.011084 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:21:02.011098 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:21:02.011108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:21:02.011145 systemd-journald[185]: Collecting audit messages is disabled.
Nov 8 00:21:02.011172 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:21:02.011185 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:21:02.011195 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:21:02.011207 systemd-journald[185]: Journal started
Nov 8 00:21:02.011231 systemd-journald[185]: Runtime Journal (/run/log/journal/1ef3bb0b2d724e9aa8ea35d66f6f337d) is 4.9M, max 39.3M, 34.4M free.
Nov 8 00:21:02.019041 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:21:02.006057 systemd-modules-load[186]: Inserted module 'overlay'
Nov 8 00:21:02.023310 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:21:02.045999 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:21:02.048003 kernel: Bridge firewalling registered
Nov 8 00:21:02.048111 systemd-modules-load[186]: Inserted module 'br_netfilter'
Nov 8 00:21:02.087775 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:21:02.088787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:21:02.090034 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:21:02.104291 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:21:02.111324 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:21:02.116237 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:21:02.118515 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:21:02.142074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:21:02.147417 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:21:02.157439 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:21:02.160910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:21:02.164349 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:21:02.175365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:21:02.186034 dracut-cmdline[215]: dracut-dracut-053
Nov 8 00:21:02.195235 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:21:02.223362 systemd-resolved[218]: Positive Trust Anchors:
Nov 8 00:21:02.223383 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:21:02.223430 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:21:02.226504 systemd-resolved[218]: Defaulting to hostname 'linux'.
Nov 8 00:21:02.227771 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:21:02.228554 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:21:02.315018 kernel: SCSI subsystem initialized
Nov 8 00:21:02.326028 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:21:02.338459 kernel: iscsi: registered transport (tcp)
Nov 8 00:21:02.363117 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:21:02.363194 kernel: QLogic iSCSI HBA Driver
Nov 8 00:21:02.419097 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:21:02.426271 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:21:02.475819 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:21:02.475917 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:21:02.478106 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:21:02.527026 kernel: raid6: avx2x4 gen() 14979 MB/s
Nov 8 00:21:02.544042 kernel: raid6: avx2x2 gen() 17342 MB/s
Nov 8 00:21:02.561267 kernel: raid6: avx2x1 gen() 13023 MB/s
Nov 8 00:21:02.561361 kernel: raid6: using algorithm avx2x2 gen() 17342 MB/s
Nov 8 00:21:02.579203 kernel: raid6: .... xor() 20999 MB/s, rmw enabled
Nov 8 00:21:02.579312 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:21:02.602014 kernel: xor: automatically using best checksumming function avx
Nov 8 00:21:02.765242 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:21:02.777995 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:21:02.784306 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:21:02.802271 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Nov 8 00:21:02.807413 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:21:02.816405 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:21:02.837120 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Nov 8 00:21:02.875292 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:21:02.882226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:21:02.940720 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:21:02.949203 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:21:02.966620 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:21:02.969633 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:21:02.970314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:21:02.972597 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:21:02.978170 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:21:03.009629 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:21:03.017286 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 8 00:21:03.031383 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 8 00:21:03.047872 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:21:03.050023 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:21:03.054117 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:21:03.054171 kernel: GPT:9289727 != 125829119
Nov 8 00:21:03.056570 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:21:03.056645 kernel: GPT:9289727 != 125829119
Nov 8 00:21:03.058588 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:21:03.058629 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:21:03.088360 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:21:03.092026 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:21:03.105172 kernel: libata version 3.00 loaded.
Nov 8 00:21:03.112005 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 8 00:21:03.115010 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 8 00:21:03.120250 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:21:03.120493 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:21:03.151104 kernel: ACPI: bus type USB registered
Nov 8 00:21:03.151142 kernel: scsi host1: ata_piix
Nov 8 00:21:03.151350 kernel: usbcore: registered new interface driver usbfs
Nov 8 00:21:03.151364 kernel: usbcore: registered new interface driver hub
Nov 8 00:21:03.151376 kernel: usbcore: registered new device driver usb
Nov 8 00:21:03.151387 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 8 00:21:03.151515 kernel: scsi host2: ata_piix
Nov 8 00:21:03.151709 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 8 00:21:03.151726 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 8 00:21:03.126323 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:21:03.127039 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:21:03.127249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:21:03.129188 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:21:03.158196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:21:03.260216 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:21:03.268333 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:21:03.305399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:21:03.346008 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 8 00:21:03.346336 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (455)
Nov 8 00:21:03.340505 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 8 00:21:03.355965 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 8 00:21:03.356188 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (453)
Nov 8 00:21:03.356203 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 8 00:21:03.356333 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 8 00:21:03.356451 kernel: hub 1-0:1.0: USB hub found
Nov 8 00:21:03.360006 kernel: hub 1-0:1.0: 2 ports detected
Nov 8 00:21:03.371949 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 8 00:21:03.376852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:21:03.381259 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 8 00:21:03.381826 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 8 00:21:03.393280 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:21:03.399460 disk-uuid[551]: Primary Header is updated.
Nov 8 00:21:03.399460 disk-uuid[551]: Secondary Entries is updated.
Nov 8 00:21:03.399460 disk-uuid[551]: Secondary Header is updated.
Nov 8 00:21:03.412022 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:21:03.420004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:21:03.438035 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:21:04.429015 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:21:04.429087 disk-uuid[552]: The operation has completed successfully.
Nov 8 00:21:04.481644 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:21:04.481888 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:21:04.500251 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:21:04.505239 sh[565]: Success
Nov 8 00:21:04.522010 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 8 00:21:04.596057 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:21:04.604139 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:21:04.608663 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:21:04.635202 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:21:04.635283 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:21:04.638386 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:21:04.638478 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:21:04.639815 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:21:04.651072 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:21:04.652211 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:21:04.658269 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:21:04.662418 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:21:04.683610 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:21:04.683735 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:21:04.685314 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:21:04.695018 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:21:04.712019 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:21:04.710710 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:21:04.723599 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:21:04.733358 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:21:04.851823 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:21:04.861392 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:21:04.893830 ignition[660]: Ignition 2.19.0
Nov 8 00:21:04.893847 ignition[660]: Stage: fetch-offline
Nov 8 00:21:04.897320 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:21:04.893901 ignition[660]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:21:04.893914 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:21:04.894151 ignition[660]: parsed url from cmdline: ""
Nov 8 00:21:04.894158 ignition[660]: no config URL provided
Nov 8 00:21:04.894168 ignition[660]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:21:04.894184 ignition[660]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:21:04.894194 ignition[660]: failed to fetch config: resource requires networking
Nov 8 00:21:04.894508 ignition[660]: Ignition finished successfully
Nov 8 00:21:04.909643 systemd-networkd[747]: lo: Link UP
Nov 8 00:21:04.909649 systemd-networkd[747]: lo: Gained carrier
Nov 8 00:21:04.913207 systemd-networkd[747]: Enumeration completed
Nov 8 00:21:04.913800 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 8 00:21:04.913806 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 8 00:21:04.914988 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:21:04.915119 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:21:04.915124 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:21:04.915609 systemd[1]: Reached target network.target - Network.
Nov 8 00:21:04.916194 systemd-networkd[747]: eth0: Link UP
Nov 8 00:21:04.916199 systemd-networkd[747]: eth0: Gained carrier
Nov 8 00:21:04.916211 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 8 00:21:04.922891 systemd-networkd[747]: eth1: Link UP
Nov 8 00:21:04.922895 systemd-networkd[747]: eth1: Gained carrier
Nov 8 00:21:04.922910 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:21:04.923220 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:21:04.936077 systemd-networkd[747]: eth1: DHCPv4 address 10.124.0.30/20 acquired from 169.254.169.253
Nov 8 00:21:04.940091 systemd-networkd[747]: eth0: DHCPv4 address 24.199.105.232/20, gateway 24.199.96.1 acquired from 169.254.169.253
Nov 8 00:21:04.955637 ignition[755]: Ignition 2.19.0
Nov 8 00:21:04.955650 ignition[755]: Stage: fetch
Nov 8 00:21:04.955873 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:21:04.955885 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:21:04.956061 ignition[755]: parsed url from cmdline: ""
Nov 8 00:21:04.956065 ignition[755]: no config URL provided
Nov 8 00:21:04.956071 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:21:04.956081 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:21:04.956101 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 8 00:21:04.972565 ignition[755]: GET result: OK
Nov 8 00:21:04.972776 ignition[755]: parsing config with SHA512: fbbe54915f0dbe49761afbdd254dd38c4b0c79f1965919d7a2825782d0dce4228e2dbd405dd1f8b4cbf3f0350cf32ab1c354ae04a5aca34f9e8e41902cfffbe4
Nov 8 00:21:04.978284 unknown[755]: fetched base config from "system"
Nov 8 00:21:04.978309 unknown[755]: fetched base config from "system"
Nov 8 00:21:04.979048 ignition[755]: fetch: fetch complete
Nov 8 00:21:04.978321 unknown[755]: fetched user config from "digitalocean"
Nov 8 00:21:04.979058 ignition[755]: fetch: fetch passed
Nov 8 00:21:04.979163 ignition[755]: Ignition finished successfully
Nov 8 00:21:04.982306 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:21:04.989245 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:21:05.021053 ignition[762]: Ignition 2.19.0
Nov 8 00:21:05.021069 ignition[762]: Stage: kargs
Nov 8 00:21:05.021367 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:21:05.021385 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:21:05.022893 ignition[762]: kargs: kargs passed
Nov 8 00:21:05.024026 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:21:05.022967 ignition[762]: Ignition finished successfully
Nov 8 00:21:05.032340 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:21:05.066800 ignition[769]: Ignition 2.19.0
Nov 8 00:21:05.066811 ignition[769]: Stage: disks
Nov 8 00:21:05.067053 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:21:05.067064 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:21:05.068673 ignition[769]: disks: disks passed
Nov 8 00:21:05.070593 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:21:05.068747 ignition[769]: Ignition finished successfully
Nov 8 00:21:05.072040 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:21:05.076936 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:21:05.077903 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:21:05.078842 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:21:05.079893 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:21:05.086325 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:21:05.108125 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:21:05.111055 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:21:05.118147 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:21:05.237000 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:21:05.237889 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:21:05.239482 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:21:05.250184 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:21:05.253131 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:21:05.256290 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Nov 8 00:21:05.268026 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (786)
Nov 8 00:21:05.268112 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:21:05.270411 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:21:05.273136 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:21:05.273377 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:21:05.274419 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:21:05.274460 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:21:05.276725 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:21:05.282202 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:21:05.291586 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:21:05.292371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:21:05.356109 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:21:05.357487 coreos-metadata[789]: Nov 08 00:21:05.357 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 8 00:21:05.365718 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:21:05.368646 coreos-metadata[788]: Nov 08 00:21:05.368 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 8 00:21:05.370348 coreos-metadata[789]: Nov 08 00:21:05.370 INFO Fetch successful
Nov 8 00:21:05.374215 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:21:05.379074 coreos-metadata[789]: Nov 08 00:21:05.379 INFO wrote hostname ci-4081.3.6-n-6d313a6df2 to /sysroot/etc/hostname
Nov 8 00:21:05.380367 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:21:05.382294 coreos-metadata[788]: Nov 08 00:21:05.381 INFO Fetch successful
Nov 8 00:21:05.386227 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:21:05.389291 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Nov 8 00:21:05.390144 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Nov 8 00:21:05.498364 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:21:05.506186 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:21:05.509014 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:21:05.522024 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:21:05.547054 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:21:05.557403 ignition[909]: INFO : Ignition 2.19.0
Nov 8 00:21:05.557403 ignition[909]: INFO : Stage: mount
Nov 8 00:21:05.559071 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:21:05.559071 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:21:05.561690 ignition[909]: INFO : mount: mount passed
Nov 8 00:21:05.561690 ignition[909]: INFO : Ignition finished successfully
Nov 8 00:21:05.562191 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:21:05.569203 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:21:05.634902 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:21:05.644319 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:21:05.669071 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (920)
Nov 8 00:21:05.673364 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:21:05.673463 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:21:05.675911 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:21:05.683016 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:21:05.684072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:21:05.716454 ignition[936]: INFO : Ignition 2.19.0
Nov 8 00:21:05.716454 ignition[936]: INFO : Stage: files
Nov 8 00:21:05.717937 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:21:05.717937 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:21:05.719530 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:21:05.721343 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:21:05.721343 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:21:05.726084 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:21:05.726943 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:21:05.727841 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:21:05.727747 unknown[936]: wrote ssh authorized keys file for user: core
Nov 8 00:21:05.730954 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:21:05.732108 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 8 00:21:05.788906 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:21:05.837670 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:21:05.839605 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:21:05.839605 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:21:05.839605 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:21:05.843031 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 8 00:21:06.129234 systemd-networkd[747]: eth0: Gained IPv6LL
Nov 8 00:21:06.266224 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:21:06.513566 systemd-networkd[747]: eth1: Gained IPv6LL
Nov 8 00:21:06.604562 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:21:06.605862 ignition[936]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:21:06.607769 ignition[936]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:21:06.609861 ignition[936]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:21:06.609861 ignition[936]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:21:06.609861 ignition[936]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:21:06.609861 ignition[936]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:21:06.609861 ignition[936]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:21:06.609861 ignition[936]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:21:06.609861 ignition[936]: INFO : files: files passed
Nov 8 00:21:06.609861 ignition[936]: INFO : Ignition finished successfully
Nov 8 00:21:06.611565 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:21:06.619296 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:21:06.624508 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:21:06.627234 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:21:06.627372 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:21:06.649186 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:21:06.649186 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:21:06.652650 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:21:06.655556 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:21:06.656327 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:21:06.666451 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:21:06.714063 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:21:06.714241 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:21:06.715888 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:21:06.716440 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:21:06.717462 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:21:06.723189 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:21:06.738878 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:21:06.745212 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:21:06.763637 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:21:06.764408 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:21:06.765490 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:21:06.766463 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:21:06.766601 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:21:06.767656 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:21:06.768251 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:21:06.769244 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:21:06.770045 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:21:06.770964 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:21:06.772002 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:21:06.772893 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:21:06.773882 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:21:06.774749 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:21:06.775695 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:21:06.776637 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:21:06.776807 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:21:06.777889 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:21:06.778910 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:21:06.779920 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:21:06.781072 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:21:06.781771 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:21:06.781961 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:21:06.783066 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:21:06.783235 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:21:06.784275 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:21:06.784414 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:21:06.785522 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 8 00:21:06.785648 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:21:06.794317 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:21:06.796801 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:21:06.797020 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:21:06.800326 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:21:06.802853 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:21:06.803095 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:21:06.804409 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:21:06.804516 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:21:06.812496 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:21:06.812650 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:21:06.827007 ignition[990]: INFO : Ignition 2.19.0
Nov 8 00:21:06.827007 ignition[990]: INFO : Stage: umount
Nov 8 00:21:06.827007 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:21:06.827007 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 8 00:21:06.830702 ignition[990]: INFO : umount: umount passed
Nov 8 00:21:06.830702 ignition[990]: INFO : Ignition finished successfully
Nov 8 00:21:06.831362 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:21:06.831496 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:21:06.834726 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:21:06.834845 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:21:06.842629 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:21:06.842721 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:21:06.843260 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:21:06.843308 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:21:06.844378 systemd[1]: Stopped target network.target - Network.
Nov 8 00:21:06.845410 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:21:06.845490 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:21:06.846532 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:21:06.847380 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:21:06.851255 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:21:06.852023 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:21:06.853097 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:21:06.854117 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:21:06.854185 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:21:06.855040 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:21:06.855079 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:21:06.855809 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:21:06.855869 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:21:06.856813 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:21:06.856880 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:21:06.857949 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:21:06.858834 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:21:06.861338 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:21:06.861989 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:21:06.862626 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:21:06.865885 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:21:06.866037 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:21:06.866308 systemd-networkd[747]: eth0: DHCPv6 lease lost
Nov 8 00:21:06.867619 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:21:06.867758 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:21:06.871152 systemd-networkd[747]: eth1: DHCPv6 lease lost
Nov 8 00:21:06.872489 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:21:06.872593 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:21:06.873914 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:21:06.874053 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:21:06.875769 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:21:06.875842 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:21:06.884265 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:21:06.885105 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:21:06.885189 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:21:06.887446 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:21:06.887533 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:21:06.888021 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:21:06.888083 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:21:06.889158 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:21:06.909018 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:21:06.909296 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:21:06.911538 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:21:06.911623 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:21:06.912577 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:21:06.912637 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:21:06.913833 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:21:06.913915 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:21:06.915552 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:21:06.915639 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:21:06.916972 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:21:06.917092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:21:06.923281 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:21:06.923817 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:21:06.923895 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:21:06.924932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:21:06.925017 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:21:06.927170 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:21:06.927819 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:21:06.939064 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:21:06.939228 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:21:06.941330 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:21:06.950678 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:21:06.961693 systemd[1]: Switching root.
Nov 8 00:21:07.004453 systemd-journald[185]: Journal stopped
Nov 8 00:21:08.279443 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:21:08.279789 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:21:08.279806 kernel: SELinux: policy capability open_perms=1
Nov 8 00:21:08.279818 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:21:08.279831 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:21:08.279843 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:21:08.279855 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:21:08.279871 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:21:08.279888 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:21:08.279900 kernel: audit: type=1403 audit(1762561267.166:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:21:08.279914 systemd[1]: Successfully loaded SELinux policy in 50.635ms.
Nov 8 00:21:08.279935 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.745ms.
Nov 8 00:21:08.279949 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:21:08.279963 systemd[1]: Detected virtualization kvm.
Nov 8 00:21:08.283051 systemd[1]: Detected architecture x86-64.
Nov 8 00:21:08.283084 systemd[1]: Detected first boot.
Nov 8 00:21:08.283106 systemd[1]: Hostname set to <ci-4081.3.6-n-6d313a6df2>.
Nov 8 00:21:08.283120 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:21:08.283133 zram_generator::config[1033]: No configuration found.
Nov 8 00:21:08.283147 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:21:08.283160 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 8 00:21:08.283172 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 8 00:21:08.283186 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:21:08.283201 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:21:08.283217 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:21:08.283231 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:21:08.283244 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:21:08.283257 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:21:08.283270 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:21:08.283283 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:21:08.283296 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:21:08.283308 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:21:08.283321 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:21:08.283337 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:21:08.283350 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:21:08.283363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:21:08.283375 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:21:08.283387 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 00:21:08.283399 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:21:08.283411 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 8 00:21:08.283427 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 8 00:21:08.283440 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:21:08.283452 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:21:08.283464 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:21:08.283479 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:21:08.283491 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:21:08.283503 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:21:08.283516 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:21:08.283531 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:21:08.283543 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:21:08.283556 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:21:08.283569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:21:08.283581 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:21:08.283600 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:21:08.283613 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:21:08.283625 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:21:08.283638 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:08.283654 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:21:08.283666 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:21:08.283679 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:21:08.283692 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:21:08.283704 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:21:08.283716 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:21:08.283729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:21:08.283741 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:21:08.283757 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:21:08.283769 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:21:08.283782 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:21:08.283794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:21:08.283807 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:21:08.283820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:21:08.283833 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:21:08.283845 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 8 00:21:08.283858 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 8 00:21:08.283873 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 8 00:21:08.283886 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 8 00:21:08.283898 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:21:08.283911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:21:08.283923 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:21:08.283936 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:21:08.283948 kernel: fuse: init (API version 7.39)
Nov 8 00:21:08.283962 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:21:08.283982 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 8 00:21:08.283998 systemd[1]: Stopped verity-setup.service.
Nov 8 00:21:08.284012 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:08.284025 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:21:08.284059 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:21:08.284073 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:21:08.284087 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:21:08.284102 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:21:08.284115 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:21:08.284127 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:21:08.284139 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:21:08.284151 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:21:08.284164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:21:08.284179 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:21:08.284192 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:21:08.284204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:21:08.284217 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:21:08.284230 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:21:08.284242 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:21:08.284256 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:21:08.284271 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:21:08.284329 systemd-journald[1113]: Collecting audit messages is disabled.
Nov 8 00:21:08.284366 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:21:08.284380 systemd-journald[1113]: Journal started
Nov 8 00:21:08.284405 systemd-journald[1113]: Runtime Journal (/run/log/journal/1ef3bb0b2d724e9aa8ea35d66f6f337d) is 4.9M, max 39.3M, 34.4M free.
Nov 8 00:21:08.291040 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:21:07.912785 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:21:07.938020 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 8 00:21:07.938579 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 8 00:21:08.301115 kernel: loop: module loaded
Nov 8 00:21:08.301229 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:21:08.301253 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:21:08.305007 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:21:08.312118 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:21:08.330014 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:21:08.337996 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:21:08.338105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:21:08.359802 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:21:08.359898 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:21:08.366010 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:21:08.370010 kernel: ACPI: bus type drm_connector registered
Nov 8 00:21:08.378051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:21:08.391505 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:21:08.402077 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:21:08.402594 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:21:08.406676 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:21:08.407172 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:21:08.408803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:21:08.408951 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:21:08.410844 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:21:08.411973 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:21:08.416044 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:21:08.432393 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:21:08.453303 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:21:08.462574 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:21:08.463060 kernel: loop0: detected capacity change from 0 to 142488
Nov 8 00:21:08.475193 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:21:08.478208 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:21:08.481290 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:21:08.493057 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:21:08.505675 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:21:08.521496 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:21:08.521258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:21:08.550394 systemd-journald[1113]: Time spent on flushing to /var/log/journal/1ef3bb0b2d724e9aa8ea35d66f6f337d is 115.793ms for 993 entries.
Nov 8 00:21:08.550394 systemd-journald[1113]: System Journal (/var/log/journal/1ef3bb0b2d724e9aa8ea35d66f6f337d) is 8.0M, max 195.6M, 187.6M free.
Nov 8 00:21:08.697335 systemd-journald[1113]: Received client request to flush runtime journal.
Nov 8 00:21:08.697433 kernel: loop1: detected capacity change from 0 to 224512
Nov 8 00:21:08.697462 kernel: loop2: detected capacity change from 0 to 140768
Nov 8 00:21:08.697481 kernel: loop3: detected capacity change from 0 to 8
Nov 8 00:21:08.549876 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:21:08.551730 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:21:08.605881 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 8 00:21:08.629597 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:21:08.637217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:21:08.701823 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:21:08.710120 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Nov 8 00:21:08.710141 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Nov 8 00:21:08.727266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:21:08.740011 kernel: loop4: detected capacity change from 0 to 142488
Nov 8 00:21:08.772351 kernel: loop5: detected capacity change from 0 to 224512
Nov 8 00:21:08.793027 kernel: loop6: detected capacity change from 0 to 140768
Nov 8 00:21:08.826004 kernel: loop7: detected capacity change from 0 to 8
Nov 8 00:21:08.828745 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Nov 8 00:21:08.829371 (sd-merge)[1178]: Merged extensions into '/usr'.
Nov 8 00:21:08.840189 systemd[1]: Reloading requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:21:08.840216 systemd[1]: Reloading...
Nov 8 00:21:09.022037 zram_generator::config[1207]: No configuration found.
Nov 8 00:21:09.111574 ldconfig[1131]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:21:09.234603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:21:09.291968 systemd[1]: Reloading finished in 451 ms.
Nov 8 00:21:09.317050 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:21:09.319865 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:21:09.328307 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:21:09.335034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:21:09.351058 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:21:09.351087 systemd[1]: Reloading...
Nov 8 00:21:09.391912 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:21:09.392354 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:21:09.393331 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:21:09.393595 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Nov 8 00:21:09.393659 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Nov 8 00:21:09.403387 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:21:09.403400 systemd-tmpfiles[1248]: Skipping /boot
Nov 8 00:21:09.424680 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:21:09.424695 systemd-tmpfiles[1248]: Skipping /boot
Nov 8 00:21:09.475029 zram_generator::config[1274]: No configuration found.
Nov 8 00:21:09.614758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:21:09.665932 systemd[1]: Reloading finished in 314 ms.
Nov 8 00:21:09.682931 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:21:09.688448 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:21:09.697285 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:21:09.701781 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:21:09.707272 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:21:09.713229 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:21:09.724162 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:21:09.726057 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:21:09.731633 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:09.731846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:21:09.743360 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:21:09.749267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:21:09.757343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:21:09.758003 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:21:09.758133 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:09.763064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:09.763294 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:21:09.763469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:21:09.772362 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:21:09.772936 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:09.773759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:21:09.774279 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:21:09.783426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:09.783809 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:21:09.797330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:21:09.800716 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Nov 8 00:21:09.802273 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:21:09.802955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:21:09.803181 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:09.805161 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:21:09.813084 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:21:09.828399 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 00:21:09.830087 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:21:09.830627 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:21:09.855929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:21:09.867445 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:21:09.879418 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:21:09.880082 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:21:09.886397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:21:09.886602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:21:09.887949 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:21:09.890643 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:21:09.891895 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:21:09.892215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:21:09.899354 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:21:09.907280 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:21:09.917951 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:21:09.918795 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:21:09.918994 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:21:09.940120 augenrules[1372]: No rules
Nov 8 00:21:09.948696 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:21:09.964819 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:21:10.038773 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 8 00:21:10.039675 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:21:10.133424 systemd-networkd[1350]: lo: Link UP
Nov 8 00:21:10.135122 systemd-networkd[1350]: lo: Gained carrier
Nov 8 00:21:10.142252 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 8 00:21:10.162764 systemd-resolved[1323]: Positive Trust Anchors:
Nov 8 00:21:10.163245 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:21:10.163354 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:21:10.172469 systemd-resolved[1323]: Using system hostname 'ci-4081.3.6-n-6d313a6df2'.
Nov 8 00:21:10.177673 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:21:10.178415 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:21:10.182705 systemd-networkd[1350]: Enumeration completed
Nov 8 00:21:10.183054 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:21:10.184073 systemd[1]: Reached target network.target - Network.
Nov 8 00:21:10.190386 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:21:10.228015 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1352)
Nov 8 00:21:10.231173 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 8 00:21:10.231940 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:10.232141 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:21:10.240415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:21:10.245292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:21:10.251248 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:21:10.253222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:21:10.253295 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:21:10.253321 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:21:10.262664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:21:10.262899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:21:10.277727 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 8 00:21:10.280673 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 8 00:21:10.293879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:21:10.294234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:21:10.301880 systemd-networkd[1350]: eth0: Configuring with /run/systemd/network/10-5a:02:b9:0f:40:01.network.
Nov 8 00:21:10.302223 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:21:10.335237 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:21:10.339696 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:21:10.385724 systemd-networkd[1350]: eth0: Link UP
Nov 8 00:21:10.385853 systemd-networkd[1350]: eth0: Gained carrier
Nov 8 00:21:10.394227 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:21:10.446126 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Nov 8 00:21:10.551311 systemd-networkd[1350]: eth1: Configuring with /run/systemd/network/10-4e:f5:1d:d6:fb:b0.network.
Nov 8 00:21:10.555303 systemd-networkd[1350]: eth1: Link UP
Nov 8 00:21:10.555317 systemd-networkd[1350]: eth1: Gained carrier
Nov 8 00:21:10.557184 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Nov 8 00:21:10.564277 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Nov 8 00:21:10.565910 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection.
Nov 8 00:21:10.588524 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:21:10.602930 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:21:10.611271 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 8 00:21:10.630357 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 8 00:21:10.634874 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:21:10.703899 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:21:10.816165 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 8 00:21:11.031027 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:21:11.064175 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 8 00:21:11.064323 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 8 00:21:11.064738 kernel: Console: switching to colour dummy device 80x25
Nov 8 00:21:11.064774 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 8 00:21:11.064799 kernel: [drm] features: -context_init
Nov 8 00:21:11.069064 kernel: [drm] number of scanouts: 1
Nov 8 00:21:11.069189 kernel: [drm] number of cap sets: 0
Nov 8 00:21:11.070452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:21:11.077048 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Nov 8 00:21:11.095408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:21:11.095692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:21:11.101621 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 8 00:21:11.101723 kernel: Console: switching to colour frame buffer device 128x48
Nov 8 00:21:11.111428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:21:11.117031 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 8 00:21:11.132964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:21:11.134218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:21:11.153558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:21:11.294411 kernel: EDAC MC: Ver: 3.0.0
Nov 8 00:21:11.313954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:21:11.326883 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:21:11.334418 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:21:11.363048 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:21:11.397673 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:21:11.401268 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:21:11.401489 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:21:11.401793 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:21:11.401970 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:21:11.402662 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:21:11.404248 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:21:11.405505 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:21:11.405623 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:21:11.405664 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:21:11.405736 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:21:11.408860 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:21:11.411604 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:21:11.424500 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:21:11.427222 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:21:11.429863 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:21:11.431623 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:21:11.433393 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:21:11.436104 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:21:11.436163 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:21:11.442365 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:21:11.448246 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 8 00:21:11.454299 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:21:11.461316 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:21:11.464267 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:21:11.475402 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:21:11.478139 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:21:11.489306 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:21:11.498170 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 00:21:11.512221 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 00:21:11.518350 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 00:21:11.529701 extend-filesystems[1438]: Found loop4
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found loop5
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found loop6
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found loop7
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found vda
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found vda1
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found vda2
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found vda3
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found usr
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found vda4
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found vda6
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found vda7
Nov 8 00:21:11.538096 extend-filesystems[1438]: Found vda9
Nov 8 00:21:11.538096 extend-filesystems[1438]: Checking size of /dev/vda9
Nov 8 00:21:11.535320 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:21:11.616619 extend-filesystems[1438]: Resized partition /dev/vda9
Nov 8 00:21:11.640704 jq[1437]: false
Nov 8 00:21:11.538548 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 00:21:11.641071 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024)
Nov 8 00:21:11.544239 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 00:21:11.655623 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Nov 8 00:21:11.546147 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 00:21:11.576170 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 00:21:11.589068 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:21:11.601609 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 00:21:11.602702 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 00:21:11.617220 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 00:21:11.618019 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 00:21:11.669334 update_engine[1446]: I20251108 00:21:11.662781 1446 main.cc:92] Flatcar Update Engine starting
Nov 8 00:21:11.669741 jq[1451]: true
Nov 8 00:21:11.680578 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 8 00:21:11.699660 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 00:21:11.699996 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 00:21:11.714955 dbus-daemon[1436]: [system] SELinux support is enabled
Nov 8 00:21:11.715339 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 00:21:11.726420 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:21:11.726480 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:21:11.728433 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:21:11.728611 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 8 00:21:11.728652 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:21:11.744699 tar[1456]: linux-amd64/LICENSE Nov 8 00:21:11.744699 tar[1456]: linux-amd64/helm Nov 8 00:21:11.751379 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:21:11.759228 update_engine[1446]: I20251108 00:21:11.755477 1446 update_check_scheduler.cc:74] Next update check in 6m53s Nov 8 00:21:11.762290 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:21:11.786060 jq[1470]: true Nov 8 00:21:11.802605 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1349) Nov 8 00:21:11.821697 coreos-metadata[1435]: Nov 08 00:21:11.820 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 8 00:21:11.841751 coreos-metadata[1435]: Nov 08 00:21:11.838 INFO Fetch successful Nov 8 00:21:11.886780 systemd-logind[1444]: New seat seat0. Nov 8 00:21:11.893743 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:21:11.893773 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:21:11.894099 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:21:11.910382 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 8 00:21:11.927128 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:21:11.927128 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 8 00:21:11.927128 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 8 00:21:11.929629 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Nov 8 00:21:11.929629 extend-filesystems[1438]: Found vdb Nov 8 00:21:11.932237 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:21:11.933122 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:21:11.953570 systemd-networkd[1350]: eth0: Gained IPv6LL Nov 8 00:21:11.956211 systemd-networkd[1350]: eth1: Gained IPv6LL Nov 8 00:21:11.956665 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Nov 8 00:21:11.980729 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:21:11.985422 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:21:12.003420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:12.029376 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Nov 8 00:21:12.081019 bash[1506]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:21:12.086134 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:21:12.097427 systemd[1]: Starting sshkeys.service... Nov 8 00:21:12.103147 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:21:12.109654 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:21:12.147251 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:21:12.185701 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:21:12.196564 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:21:12.306890 coreos-metadata[1521]: Nov 08 00:21:12.306 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 8 00:21:12.310431 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:21:12.323538 coreos-metadata[1521]: Nov 08 00:21:12.323 INFO Fetch successful Nov 8 00:21:12.352928 unknown[1521]: wrote ssh authorized keys file for user: core Nov 8 00:21:12.355121 containerd[1469]: time="2025-11-08T00:21:12.353964368Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:21:12.397356 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:21:12.399210 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:21:12.403778 systemd[1]: Finished sshkeys.service. Nov 8 00:21:12.430622 containerd[1469]: time="2025-11-08T00:21:12.430330114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.433847899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.433898053Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.433922496Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.434224727Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.434250781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.434316976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.434349391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.434621744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.434642334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.434666364Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436041 containerd[1469]: time="2025-11-08T00:21:12.434685716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436641 containerd[1469]: time="2025-11-08T00:21:12.434781886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436641 containerd[1469]: time="2025-11-08T00:21:12.435237370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436641 containerd[1469]: time="2025-11-08T00:21:12.435473206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:21:12.436641 containerd[1469]: time="2025-11-08T00:21:12.435504837Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:21:12.436641 containerd[1469]: time="2025-11-08T00:21:12.435647537Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:21:12.436641 containerd[1469]: time="2025-11-08T00:21:12.435735554Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:21:12.441017 containerd[1469]: time="2025-11-08T00:21:12.439855804Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:21:12.441017 containerd[1469]: time="2025-11-08T00:21:12.439945979Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:21:12.441017 containerd[1469]: time="2025-11-08T00:21:12.440009627Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:21:12.441017 containerd[1469]: time="2025-11-08T00:21:12.440035476Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:21:12.441017 containerd[1469]: time="2025-11-08T00:21:12.440085068Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:21:12.441017 containerd[1469]: time="2025-11-08T00:21:12.440354146Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:21:12.441017 containerd[1469]: time="2025-11-08T00:21:12.441032512Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 8 00:21:12.441331 containerd[1469]: time="2025-11-08T00:21:12.441227797Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:21:12.441331 containerd[1469]: time="2025-11-08T00:21:12.441256587Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:21:12.441331 containerd[1469]: time="2025-11-08T00:21:12.441278263Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:21:12.441331 containerd[1469]: time="2025-11-08T00:21:12.441299318Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:21:12.441331 containerd[1469]: time="2025-11-08T00:21:12.441320778Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:21:12.441438 containerd[1469]: time="2025-11-08T00:21:12.441339049Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:21:12.441438 containerd[1469]: time="2025-11-08T00:21:12.441361556Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:21:12.441438 containerd[1469]: time="2025-11-08T00:21:12.441381137Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:21:12.441438 containerd[1469]: time="2025-11-08T00:21:12.441398994Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:21:12.441544 containerd[1469]: time="2025-11-08T00:21:12.441455485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:21:12.441544 containerd[1469]: time="2025-11-08T00:21:12.441497073Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:21:12.441544 containerd[1469]: time="2025-11-08T00:21:12.441538804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441611 containerd[1469]: time="2025-11-08T00:21:12.441557941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441611 containerd[1469]: time="2025-11-08T00:21:12.441571468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441611 containerd[1469]: time="2025-11-08T00:21:12.441585103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441611 containerd[1469]: time="2025-11-08T00:21:12.441598742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441702 containerd[1469]: time="2025-11-08T00:21:12.441620570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441702 containerd[1469]: time="2025-11-08T00:21:12.441635722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441702 containerd[1469]: time="2025-11-08T00:21:12.441660091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 8 00:21:12.441702 containerd[1469]: time="2025-11-08T00:21:12.441672690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441784 containerd[1469]: time="2025-11-08T00:21:12.441687016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441784 containerd[1469]: time="2025-11-08T00:21:12.441767601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441784 containerd[1469]: time="2025-11-08T00:21:12.441779606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441849 containerd[1469]: time="2025-11-08T00:21:12.441801736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441849 containerd[1469]: time="2025-11-08T00:21:12.441822226Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:21:12.441849 containerd[1469]: time="2025-11-08T00:21:12.441844846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441911 containerd[1469]: time="2025-11-08T00:21:12.441882942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.441911 containerd[1469]: time="2025-11-08T00:21:12.441896958Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:21:12.441958 containerd[1469]: time="2025-11-08T00:21:12.441942586Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:21:12.442492 containerd[1469]: time="2025-11-08T00:21:12.442004276Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:21:12.442492 containerd[1469]: time="2025-11-08T00:21:12.442024063Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:21:12.442492 containerd[1469]: time="2025-11-08T00:21:12.442037808Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:21:12.442492 containerd[1469]: time="2025-11-08T00:21:12.442047644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:21:12.442492 containerd[1469]: time="2025-11-08T00:21:12.442070673Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:21:12.442492 containerd[1469]: time="2025-11-08T00:21:12.442086325Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:21:12.442492 containerd[1469]: time="2025-11-08T00:21:12.442105613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:21:12.442694 containerd[1469]: time="2025-11-08T00:21:12.442501314Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:21:12.442694 containerd[1469]: time="2025-11-08T00:21:12.442613929Z" level=info msg="Connect containerd service" Nov 8 00:21:12.442694 containerd[1469]: time="2025-11-08T00:21:12.442690025Z" level=info msg="using legacy CRI server" Nov 8 00:21:12.443371 containerd[1469]: time="2025-11-08T00:21:12.442705184Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:21:12.443371 containerd[1469]: time="2025-11-08T00:21:12.442916366Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:21:12.448683 containerd[1469]: time="2025-11-08T00:21:12.444325626Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:21:12.448683 
containerd[1469]: time="2025-11-08T00:21:12.444657178Z" level=info msg="Start subscribing containerd event" Nov 8 00:21:12.448683 containerd[1469]: time="2025-11-08T00:21:12.444728336Z" level=info msg="Start recovering state" Nov 8 00:21:12.448683 containerd[1469]: time="2025-11-08T00:21:12.444804462Z" level=info msg="Start event monitor" Nov 8 00:21:12.448683 containerd[1469]: time="2025-11-08T00:21:12.444824772Z" level=info msg="Start snapshots syncer" Nov 8 00:21:12.448683 containerd[1469]: time="2025-11-08T00:21:12.444834697Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:21:12.448683 containerd[1469]: time="2025-11-08T00:21:12.444843300Z" level=info msg="Start streaming server" Nov 8 00:21:12.448683 containerd[1469]: time="2025-11-08T00:21:12.445928200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:21:12.448683 containerd[1469]: time="2025-11-08T00:21:12.446052874Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:21:12.446302 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:21:12.449104 containerd[1469]: time="2025-11-08T00:21:12.448958614Z" level=info msg="containerd successfully booted in 0.102836s" Nov 8 00:21:12.709459 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:21:12.823633 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:21:12.834268 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:21:12.874700 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:21:12.874919 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:21:12.885750 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:21:12.919964 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:21:12.932695 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:21:12.943465 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:21:12.949530 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:21:13.115615 tar[1456]: linux-amd64/README.md Nov 8 00:21:13.133858 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:21:13.729021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:13.731478 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:21:13.737435 systemd[1]: Startup finished in 1.240s (kernel) + 5.447s (initrd) + 6.620s (userspace) = 13.308s. Nov 8 00:21:13.742663 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:21:14.484908 kubelet[1559]: E1108 00:21:14.484743 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:21:14.487761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:21:14.487931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:21:14.488437 systemd[1]: kubelet.service: Consumed 1.423s CPU time. Nov 8 00:21:15.645606 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Nov 8 00:21:15.659474 systemd[1]: Started sshd@0-24.199.105.232:22-139.178.68.195:45784.service - OpenSSH per-connection server daemon (139.178.68.195:45784). Nov 8 00:21:15.733467 sshd[1571]: Accepted publickey for core from 139.178.68.195 port 45784 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:15.737020 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:15.753723 systemd-logind[1444]: New session 1 of user core. Nov 8 00:21:15.756529 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:21:15.769559 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:21:15.790417 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:21:15.797443 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:21:15.817146 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:21:15.973421 systemd[1575]: Queued start job for default target default.target. Nov 8 00:21:15.985761 systemd[1575]: Created slice app.slice - User Application Slice. Nov 8 00:21:15.985805 systemd[1575]: Reached target paths.target - Paths. Nov 8 00:21:15.985822 systemd[1575]: Reached target timers.target - Timers. Nov 8 00:21:15.988058 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:21:16.017552 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:21:16.017762 systemd[1575]: Reached target sockets.target - Sockets. Nov 8 00:21:16.017782 systemd[1575]: Reached target basic.target - Basic System. Nov 8 00:21:16.017863 systemd[1575]: Reached target default.target - Main User Target. Nov 8 00:21:16.017922 systemd[1575]: Startup finished in 187ms. Nov 8 00:21:16.018407 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:21:16.025292 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:21:16.108160 systemd[1]: Started sshd@1-24.199.105.232:22-139.178.68.195:45800.service - OpenSSH per-connection server daemon (139.178.68.195:45800). Nov 8 00:21:16.159864 sshd[1586]: Accepted publickey for core from 139.178.68.195 port 45800 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:16.162583 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:16.170620 systemd-logind[1444]: New session 2 of user core. Nov 8 00:21:16.179590 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:21:16.250526 sshd[1586]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:16.265718 systemd[1]: sshd@1-24.199.105.232:22-139.178.68.195:45800.service: Deactivated successfully. Nov 8 00:21:16.269356 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:21:16.273926 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:21:16.280514 systemd[1]: Started sshd@2-24.199.105.232:22-139.178.68.195:45804.service - OpenSSH per-connection server daemon (139.178.68.195:45804). Nov 8 00:21:16.282391 systemd-logind[1444]: Removed session 2. Nov 8 00:21:16.325132 sshd[1593]: Accepted publickey for core from 139.178.68.195 port 45804 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:16.327626 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:16.337317 systemd-logind[1444]: New session 3 of user core. 
Nov 8 00:21:16.344281 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:21:16.406406 sshd[1593]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:16.416398 systemd[1]: sshd@2-24.199.105.232:22-139.178.68.195:45804.service: Deactivated successfully. Nov 8 00:21:16.419340 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:21:16.422311 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:21:16.427510 systemd[1]: Started sshd@3-24.199.105.232:22-139.178.68.195:45810.service - OpenSSH per-connection server daemon (139.178.68.195:45810). Nov 8 00:21:16.429153 systemd-logind[1444]: Removed session 3. Nov 8 00:21:16.480954 sshd[1600]: Accepted publickey for core from 139.178.68.195 port 45810 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:16.483649 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:16.490090 systemd-logind[1444]: New session 4 of user core. Nov 8 00:21:16.501370 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:21:16.569253 sshd[1600]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:16.583195 systemd[1]: sshd@3-24.199.105.232:22-139.178.68.195:45810.service: Deactivated successfully. Nov 8 00:21:16.586863 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:21:16.590306 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:21:16.595541 systemd[1]: Started sshd@4-24.199.105.232:22-139.178.68.195:45818.service - OpenSSH per-connection server daemon (139.178.68.195:45818). Nov 8 00:21:16.597472 systemd-logind[1444]: Removed session 4. Nov 8 00:21:16.641641 sshd[1607]: Accepted publickey for core from 139.178.68.195 port 45818 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:16.643749 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:16.652067 systemd-logind[1444]: New session 5 of user core. Nov 8 00:21:16.654263 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:21:16.730432 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:21:16.730835 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:21:16.750970 sudo[1610]: pam_unix(sudo:session): session closed for user root Nov 8 00:21:16.755820 sshd[1607]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:16.767184 systemd[1]: sshd@4-24.199.105.232:22-139.178.68.195:45818.service: Deactivated successfully. Nov 8 00:21:16.769715 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:21:16.772486 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:21:16.779608 systemd[1]: Started sshd@5-24.199.105.232:22-139.178.68.195:45824.service - OpenSSH per-connection server daemon (139.178.68.195:45824). Nov 8 00:21:16.781789 systemd-logind[1444]: Removed session 5. Nov 8 00:21:16.827153 sshd[1615]: Accepted publickey for core from 139.178.68.195 port 45824 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:16.830078 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:16.839376 systemd-logind[1444]: New session 6 of user core. Nov 8 00:21:16.848476 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 8 00:21:16.914053 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:21:16.914701 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:21:16.922075 sudo[1619]: pam_unix(sudo:session): session closed for user root Nov 8 00:21:16.932299 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:21:16.932824 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:21:16.958407 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:21:16.961243 auditctl[1622]: No rules Nov 8 00:21:16.966963 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:21:16.967356 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:21:16.975465 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:21:16.979219 systemd[1]: Started sshd@6-24.199.105.232:22-140.233.190.96:43106.service - OpenSSH per-connection server daemon (140.233.190.96:43106). Nov 8 00:21:17.022366 augenrules[1642]: No rules Nov 8 00:21:17.024848 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:21:17.026929 sudo[1618]: pam_unix(sudo:session): session closed for user root Nov 8 00:21:17.031892 sshd[1615]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:17.046532 systemd[1]: sshd@5-24.199.105.232:22-139.178.68.195:45824.service: Deactivated successfully. Nov 8 00:21:17.048970 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:21:17.051522 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:21:17.057521 systemd[1]: Started sshd@7-24.199.105.232:22-139.178.68.195:45828.service - OpenSSH per-connection server daemon (139.178.68.195:45828). Nov 8 00:21:17.059327 systemd-logind[1444]: Removed session 6. Nov 8 00:21:17.119157 sshd[1650]: Accepted publickey for core from 139.178.68.195 port 45828 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:17.121655 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:17.128407 systemd-logind[1444]: New session 7 of user core. Nov 8 00:21:17.139399 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:21:17.205821 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:21:17.206318 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:21:17.812509 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:21:17.822837 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:21:18.319323 dockerd[1670]: time="2025-11-08T00:21:18.319249360Z" level=info msg="Starting up" Nov 8 00:21:18.475579 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1109476617-merged.mount: Deactivated successfully. Nov 8 00:21:18.500957 dockerd[1670]: time="2025-11-08T00:21:18.500862359Z" level=info msg="Loading containers: start." Nov 8 00:21:18.654141 kernel: Initializing XFRM netlink socket Nov 8 00:21:18.687543 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. 
Nov 8 00:21:18.691383 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Nov 8 00:21:18.709734 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Nov 8 00:21:18.762247 systemd-networkd[1350]: docker0: Link UP Nov 8 00:21:18.762628 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Nov 8 00:21:18.785936 dockerd[1670]: time="2025-11-08T00:21:18.785741072Z" level=info msg="Loading containers: done." Nov 8 00:21:18.813013 dockerd[1670]: time="2025-11-08T00:21:18.811262235Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:21:18.813013 dockerd[1670]: time="2025-11-08T00:21:18.811412778Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:21:18.813013 dockerd[1670]: time="2025-11-08T00:21:18.811562891Z" level=info msg="Daemon has completed initialization" Nov 8 00:21:18.813621 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck513264540-merged.mount: Deactivated successfully. Nov 8 00:21:18.860439 dockerd[1670]: time="2025-11-08T00:21:18.860145674Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:21:18.861181 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:21:19.868016 containerd[1469]: time="2025-11-08T00:21:19.867530313Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:21:20.506610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489216685.mount: Deactivated successfully. Nov 8 00:21:21.689019 sshd[1628]: Connection closed by authenticating user root 140.233.190.96 port 43106 [preauth] Nov 8 00:21:21.692659 systemd[1]: sshd@6-24.199.105.232:22-140.233.190.96:43106.service: Deactivated successfully. 
Nov 8 00:21:21.800809 containerd[1469]: time="2025-11-08T00:21:21.800731848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:21.802477 containerd[1469]: time="2025-11-08T00:21:21.802182853Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:21:21.805020 containerd[1469]: time="2025-11-08T00:21:21.803315129Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:21.810456 containerd[1469]: time="2025-11-08T00:21:21.810387624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:21.812440 containerd[1469]: time="2025-11-08T00:21:21.812373758Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.944740804s" Nov 8 00:21:21.812651 containerd[1469]: time="2025-11-08T00:21:21.812626507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:21:21.813540 containerd[1469]: time="2025-11-08T00:21:21.813484880Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:21:23.474507 containerd[1469]: time="2025-11-08T00:21:23.473253685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:23.474507 containerd[1469]: time="2025-11-08T00:21:23.474235730Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 00:21:23.474507 containerd[1469]: time="2025-11-08T00:21:23.474444678Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:23.483671 containerd[1469]: time="2025-11-08T00:21:23.483593982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:23.484696 containerd[1469]: time="2025-11-08T00:21:23.484642367Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.671099369s" Nov 8 00:21:23.484873 containerd[1469]: time="2025-11-08T00:21:23.484854161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:21:23.485874 containerd[1469]: 
time="2025-11-08T00:21:23.485828982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:21:24.738557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:21:24.746396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:24.949437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:24.958950 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:21:25.053781 kubelet[1895]: E1108 00:21:25.053144 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:21:25.057815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:21:25.058017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:21:25.075026 containerd[1469]: time="2025-11-08T00:21:25.074930456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:25.076583 containerd[1469]: time="2025-11-08T00:21:25.076525823Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 00:21:25.078012 containerd[1469]: time="2025-11-08T00:21:25.076731103Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:25.080036 containerd[1469]: time="2025-11-08T00:21:25.079996046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:25.081711 containerd[1469]: time="2025-11-08T00:21:25.081655204Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.595777635s" Nov 8 00:21:25.081807 containerd[1469]: time="2025-11-08T00:21:25.081717783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:21:25.082342 containerd[1469]: time="2025-11-08T00:21:25.082292291Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:21:26.368614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894541213.mount: Deactivated successfully. 
Nov 8 00:21:27.024652 containerd[1469]: time="2025-11-08T00:21:27.024579372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:27.025792 containerd[1469]: time="2025-11-08T00:21:27.025450370Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:21:27.027077 containerd[1469]: time="2025-11-08T00:21:27.026922044Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:27.031578 containerd[1469]: time="2025-11-08T00:21:27.031502917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:27.033006 containerd[1469]: time="2025-11-08T00:21:27.032701159Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.950332092s" Nov 8 00:21:27.033006 containerd[1469]: time="2025-11-08T00:21:27.032767978Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:21:27.033795 containerd[1469]: time="2025-11-08T00:21:27.033661243Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:21:27.036157 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 8 00:21:27.683603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332975152.mount: Deactivated successfully. 
Nov 8 00:21:28.762796 containerd[1469]: time="2025-11-08T00:21:28.761666806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:28.764077 containerd[1469]: time="2025-11-08T00:21:28.764026531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 8 00:21:28.765211 containerd[1469]: time="2025-11-08T00:21:28.765156325Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:28.769634 containerd[1469]: time="2025-11-08T00:21:28.769560865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:28.771259 containerd[1469]: time="2025-11-08T00:21:28.771208182Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.737482655s" Nov 8 00:21:28.771550 containerd[1469]: time="2025-11-08T00:21:28.771437404Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:21:28.772666 containerd[1469]: time="2025-11-08T00:21:28.772445513Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:21:29.364565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236580051.mount: Deactivated successfully. 
Nov 8 00:21:29.373293 containerd[1469]: time="2025-11-08T00:21:29.372343032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:29.374447 containerd[1469]: time="2025-11-08T00:21:29.374302120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:21:29.375500 containerd[1469]: time="2025-11-08T00:21:29.375460667Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:29.378666 containerd[1469]: time="2025-11-08T00:21:29.378609988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:29.380303 containerd[1469]: time="2025-11-08T00:21:29.380241965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.757729ms" Nov 8 00:21:29.380703 containerd[1469]: time="2025-11-08T00:21:29.380555632Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:21:29.381373 containerd[1469]: time="2025-11-08T00:21:29.381340557Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:21:30.061004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604190456.mount: Deactivated successfully. Nov 8 00:21:30.129207 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Nov 8 00:21:32.083095 containerd[1469]: time="2025-11-08T00:21:32.083006477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:32.086019 containerd[1469]: time="2025-11-08T00:21:32.084303052Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 8 00:21:32.086589 containerd[1469]: time="2025-11-08T00:21:32.086537643Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:32.097159 containerd[1469]: time="2025-11-08T00:21:32.096912590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:32.099091 containerd[1469]: time="2025-11-08T00:21:32.099017592Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.717635494s" Nov 8 00:21:32.099377 containerd[1469]: time="2025-11-08T00:21:32.099330927Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:21:35.223412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:21:35.230319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:35.246449 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:21:35.246552 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:21:35.246859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:35.254454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:35.312067 systemd[1]: Reloading requested from client PID 2051 ('systemctl') (unit session-7.scope)... Nov 8 00:21:35.312521 systemd[1]: Reloading... Nov 8 00:21:35.455055 zram_generator::config[2093]: No configuration found. Nov 8 00:21:35.583896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:21:35.678651 systemd[1]: Reloading finished in 365 ms. Nov 8 00:21:35.745095 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:21:35.745224 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:21:35.745602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:35.752665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:35.958214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:35.968635 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:21:36.040918 kubelet[2145]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:21:36.040918 kubelet[2145]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:21:36.040918 kubelet[2145]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:21:36.041432 kubelet[2145]: I1108 00:21:36.041035 2145 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:21:36.482707 kubelet[2145]: I1108 00:21:36.482645 2145 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:21:36.482707 kubelet[2145]: I1108 00:21:36.482701 2145 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:21:36.483259 kubelet[2145]: I1108 00:21:36.483224 2145 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:21:36.520356 kubelet[2145]: I1108 00:21:36.520259 2145 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:21:36.524228 kubelet[2145]: E1108 00:21:36.524177 2145 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://24.199.105.232:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.199.105.232:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:36.530569 kubelet[2145]: E1108 00:21:36.530530 2145 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:21:36.530763 kubelet[2145]: I1108 00:21:36.530751 2145 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:21:36.534452 kubelet[2145]: I1108 00:21:36.534417 2145 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:21:36.536631 kubelet[2145]: I1108 00:21:36.536553 2145 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:21:36.537005 kubelet[2145]: I1108 00:21:36.536798 2145 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-6d313a6df2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:21:36.537146 kubelet[2145]: I1108 00:21:36.537135 2145 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:21:36.537194 kubelet[2145]: I1108 00:21:36.537188 2145 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:21:36.537450 kubelet[2145]: I1108 00:21:36.537386 2145 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:21:36.546032 kubelet[2145]: I1108 00:21:36.545903 2145 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:21:36.546032 kubelet[2145]: I1108 00:21:36.546006 2145 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:21:36.546032 kubelet[2145]: I1108 00:21:36.546037 2145 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:21:36.546032 kubelet[2145]: I1108 00:21:36.546052 2145 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:21:36.553476 kubelet[2145]: W1108 00:21:36.553137 2145 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.199.105.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-6d313a6df2&limit=500&resourceVersion=0": dial tcp 24.199.105.232:6443: connect: connection refused Nov 8 00:21:36.553476 kubelet[2145]: E1108 00:21:36.553242 2145 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://24.199.105.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-6d313a6df2&limit=500&resourceVersion=0\": dial tcp 24.199.105.232:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:36.555284 
kubelet[2145]: I1108 00:21:36.555095 2145 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:21:36.560140 kubelet[2145]: I1108 00:21:36.560093 2145 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:21:36.562433 kubelet[2145]: W1108 00:21:36.561207 2145 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:21:36.566525 kubelet[2145]: W1108 00:21:36.566217 2145 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.199.105.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.199.105.232:6443: connect: connection refused Nov 8 00:21:36.566525 kubelet[2145]: E1108 00:21:36.566298 2145 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.199.105.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.105.232:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:36.567483 kubelet[2145]: I1108 00:21:36.567441 2145 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:21:36.567615 kubelet[2145]: I1108 00:21:36.567523 2145 server.go:1287] "Started kubelet" Nov 8 00:21:36.578548 kubelet[2145]: E1108 00:21:36.576714 2145 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.105.232:6443/api/v1/namespaces/default/events\": dial tcp 24.199.105.232:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-6d313a6df2.1875e02928c1ad92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-6d313a6df2,UID:ci-4081.3.6-n-6d313a6df2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-6d313a6df2,},FirstTimestamp:2025-11-08 00:21:36.567479698 +0000 UTC m=+0.592343720,LastTimestamp:2025-11-08 00:21:36.567479698 +0000 UTC m=+0.592343720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-6d313a6df2,}" Nov 8 00:21:36.580490 kubelet[2145]: I1108 00:21:36.580384 2145 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:21:36.581210 kubelet[2145]: I1108 00:21:36.581181 2145 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:21:36.583450 kubelet[2145]: I1108 00:21:36.583391 2145 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:21:36.586145 kubelet[2145]: I1108 00:21:36.586094 2145 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:21:36.586783 kubelet[2145]: I1108 00:21:36.586759 2145 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:21:36.589916 kubelet[2145]: I1108 00:21:36.589086 2145 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:21:36.592906 kubelet[2145]: I1108 00:21:36.592876 2145 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:21:36.593451 kubelet[2145]: E1108 00:21:36.593161 2145 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-6d313a6df2\" not found" Nov 8 00:21:36.595659 kubelet[2145]: E1108 00:21:36.595601 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.105.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-6d313a6df2?timeout=10s\": dial tcp 24.199.105.232:6443: connect: connection refused" interval="200ms" Nov 8 00:21:36.597086 kubelet[2145]: I1108 00:21:36.597058 2145 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:21:36.598055 kubelet[2145]: I1108 00:21:36.597511 2145 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:21:36.598055 kubelet[2145]: I1108 00:21:36.597636 2145 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:21:36.605012 kubelet[2145]: I1108 00:21:36.603659 2145 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:21:36.605368 kubelet[2145]: I1108 00:21:36.605339 2145 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:21:36.615722 kubelet[2145]: W1108 00:21:36.615636 2145 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.199.105.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.105.232:6443: connect: connection refused Nov 8 00:21:36.616033 kubelet[2145]: E1108 00:21:36.615953 2145 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.199.105.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.105.232:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:36.623039 kubelet[2145]: I1108 00:21:36.622950 2145 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:21:36.624607 kubelet[2145]: I1108 00:21:36.624571 2145 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:21:36.624607 kubelet[2145]: I1108 00:21:36.624608 2145 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:21:36.624764 kubelet[2145]: I1108 00:21:36.624636 2145 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
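The NodeConfig dump above spells out the kubelet's five default hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, and imagefs.inodesFree < 5%. A minimal Go sketch of how such thresholds can be represented and checked follows; the threshold values are copied from the log line, but the evaluation logic is a simplification for illustration, not the eviction manager's actual code:

    package main

    import "fmt"

    // threshold mirrors one entry of the HardEvictionThresholds array in
    // the NodeConfig log line above: a signal plus either an absolute
    // quantity or a percentage of capacity.
    type threshold struct {
    	Signal   string
    	Quantity int64   // absolute bytes; 0 when percentage-based
    	Pct      float64 // fraction of capacity; 0 when quantity-based
    }

    // The five defaults below are copied from the logged NodeConfig.
    var hardEviction = []threshold{
    	{Signal: "memory.available", Quantity: 100 << 20}, // 100Mi
    	{Signal: "nodefs.available", Pct: 0.10},
    	{Signal: "nodefs.inodesFree", Pct: 0.05},
    	{Signal: "imagefs.available", Pct: 0.15},
    	{Signal: "imagefs.inodesFree", Pct: 0.05},
    }

    // crossed reports whether the observed free amount has fallen below
    // the threshold, given the node's capacity for that resource.
    func (t threshold) crossed(observed, capacity int64) bool {
    	if t.Quantity > 0 {
    		return observed < t.Quantity
    	}
    	return float64(observed) < t.Pct*float64(capacity)
    }

    func main() {
    	// Hypothetical node with 50Mi of memory available out of 2Gi:
    	fmt.Println(hardEviction[0].Signal, "crossed:",
    		hardEviction[0].crossed(50<<20, 2<<30)) // true -> signal fires
    }
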
Nov 8 00:21:36.624764 kubelet[2145]: I1108 00:21:36.624647 2145 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:21:36.624764 kubelet[2145]: E1108 00:21:36.624724 2145 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:21:36.634372 kubelet[2145]: W1108 00:21:36.634295 2145 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.199.105.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.105.232:6443: connect: connection refused Nov 8 00:21:36.635378 kubelet[2145]: E1108 00:21:36.635329 2145 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.199.105.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.105.232:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:36.636621 kubelet[2145]: E1108 00:21:36.636582 2145 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:21:36.641010 kubelet[2145]: I1108 00:21:36.640893 2145 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:21:36.641352 kubelet[2145]: I1108 00:21:36.641207 2145 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:21:36.641352 kubelet[2145]: I1108 00:21:36.641237 2145 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:21:36.645893 kubelet[2145]: I1108 00:21:36.645422 2145 policy_none.go:49] "None policy: Start" Nov 8 00:21:36.645893 kubelet[2145]: I1108 00:21:36.645477 2145 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:21:36.645893 kubelet[2145]: I1108 00:21:36.645505 2145 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:21:36.654732 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:21:36.673197 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:21:36.678644 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:21:36.687581 kubelet[2145]: I1108 00:21:36.687528 2145 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:21:36.687785 kubelet[2145]: I1108 00:21:36.687746 2145 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:21:36.687785 kubelet[2145]: I1108 00:21:36.687759 2145 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:21:36.689309 kubelet[2145]: I1108 00:21:36.688297 2145 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:21:36.690656 kubelet[2145]: E1108 00:21:36.690626 2145 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:21:36.690774 kubelet[2145]: E1108 00:21:36.690681 2145 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-6d313a6df2\" not found" Nov 8 00:21:36.738796 systemd[1]: Created slice kubepods-burstable-pod74aa6cb969560ecb253e3e131cbe1fd6.slice - libcontainer container kubepods-burstable-pod74aa6cb969560ecb253e3e131cbe1fd6.slice. Nov 8 00:21:36.753165 kubelet[2145]: E1108 00:21:36.753071 2145 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d313a6df2\" not found" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.757415 systemd[1]: Created slice kubepods-burstable-podbfd37ae30e65b5bf58976763a2b7cfe0.slice - libcontainer container kubepods-burstable-podbfd37ae30e65b5bf58976763a2b7cfe0.slice. Nov 8 00:21:36.760871 kubelet[2145]: E1108 00:21:36.760832 2145 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d313a6df2\" not found" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.764861 systemd[1]: Created slice kubepods-burstable-pod5aca24b4863479b6cb4f39f748f2030a.slice - libcontainer container kubepods-burstable-pod5aca24b4863479b6cb4f39f748f2030a.slice. Nov 8 00:21:36.766999 kubelet[2145]: E1108 00:21:36.766944 2145 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d313a6df2\" not found" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.789994 kubelet[2145]: I1108 00:21:36.789541 2145 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.789994 kubelet[2145]: E1108 00:21:36.789953 2145 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.105.232:6443/api/v1/nodes\": dial tcp 24.199.105.232:6443: connect: connection refused" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.796714 kubelet[2145]: E1108 00:21:36.796668 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.105.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-6d313a6df2?timeout=10s\": dial tcp 24.199.105.232:6443: connect: connection refused" interval="400ms" Nov 8 00:21:36.807306 kubelet[2145]: I1108 00:21:36.807227 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74aa6cb969560ecb253e3e131cbe1fd6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-6d313a6df2\" (UID: \"74aa6cb969560ecb253e3e131cbe1fd6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.807306 kubelet[2145]: I1108 00:21:36.807284 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bfd37ae30e65b5bf58976763a2b7cfe0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" (UID: \"bfd37ae30e65b5bf58976763a2b7cfe0\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.807306 kubelet[2145]: I1108 00:21:36.807315 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfd37ae30e65b5bf58976763a2b7cfe0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" 
(UID: \"bfd37ae30e65b5bf58976763a2b7cfe0\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.807568 kubelet[2145]: I1108 00:21:36.807335 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74aa6cb969560ecb253e3e131cbe1fd6-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-6d313a6df2\" (UID: \"74aa6cb969560ecb253e3e131cbe1fd6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.807568 kubelet[2145]: I1108 00:21:36.807357 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74aa6cb969560ecb253e3e131cbe1fd6-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-6d313a6df2\" (UID: \"74aa6cb969560ecb253e3e131cbe1fd6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.807568 kubelet[2145]: I1108 00:21:36.807379 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfd37ae30e65b5bf58976763a2b7cfe0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" (UID: \"bfd37ae30e65b5bf58976763a2b7cfe0\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.807568 kubelet[2145]: I1108 00:21:36.807399 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bfd37ae30e65b5bf58976763a2b7cfe0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" (UID: \"bfd37ae30e65b5bf58976763a2b7cfe0\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.807568 kubelet[2145]: I1108 00:21:36.807419 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfd37ae30e65b5bf58976763a2b7cfe0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" (UID: \"bfd37ae30e65b5bf58976763a2b7cfe0\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.807735 kubelet[2145]: I1108 00:21:36.807446 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5aca24b4863479b6cb4f39f748f2030a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-6d313a6df2\" (UID: \"5aca24b4863479b6cb4f39f748f2030a\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.991614 kubelet[2145]: I1108 00:21:36.991486 2145 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:36.993334 kubelet[2145]: E1108 00:21:36.993291 2145 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.105.232:6443/api/v1/nodes\": dial tcp 24.199.105.232:6443: connect: connection refused" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:37.055079 kubelet[2145]: E1108 00:21:37.054656 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:37.056151 containerd[1469]: time="2025-11-08T00:21:37.055805132Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-6d313a6df2,Uid:74aa6cb969560ecb253e3e131cbe1fd6,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:37.058659 systemd-resolved[1323]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Nov 8 00:21:37.061681 kubelet[2145]: E1108 00:21:37.061637 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:37.068199 kubelet[2145]: E1108 00:21:37.068152 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:37.071373 containerd[1469]: time="2025-11-08T00:21:37.070915194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-6d313a6df2,Uid:5aca24b4863479b6cb4f39f748f2030a,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:37.071373 containerd[1469]: time="2025-11-08T00:21:37.070932283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-6d313a6df2,Uid:bfd37ae30e65b5bf58976763a2b7cfe0,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:37.198262 kubelet[2145]: E1108 00:21:37.198199 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.105.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-6d313a6df2?timeout=10s\": dial tcp 24.199.105.232:6443: connect: connection refused" interval="800ms" Nov 8 00:21:37.237012 kubelet[2145]: E1108 00:21:37.236812 2145 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.105.232:6443/api/v1/namespaces/default/events\": dial tcp 24.199.105.232:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-6d313a6df2.1875e02928c1ad92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-6d313a6df2,UID:ci-4081.3.6-n-6d313a6df2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-6d313a6df2,},FirstTimestamp:2025-11-08 00:21:36.567479698 +0000 UTC m=+0.592343720,LastTimestamp:2025-11-08 00:21:36.567479698 +0000 UTC m=+0.592343720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-6d313a6df2,}" Nov 8 00:21:37.395018 kubelet[2145]: I1108 00:21:37.394956 2145 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:37.395474 kubelet[2145]: E1108 00:21:37.395440 2145 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.105.232:6443/api/v1/nodes\": dial tcp 24.199.105.232:6443: connect: connection refused" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:37.421631 kubelet[2145]: W1108 00:21:37.421535 2145 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.199.105.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.199.105.232:6443: connect: connection refused Nov 8 00:21:37.421811 kubelet[2145]: E1108 00:21:37.421673 2145 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://24.199.105.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.105.232:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:37.562495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3063176637.mount: Deactivated successfully. Nov 8 00:21:37.569028 containerd[1469]: time="2025-11-08T00:21:37.567760109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:37.569216 containerd[1469]: time="2025-11-08T00:21:37.569159774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:21:37.570015 containerd[1469]: time="2025-11-08T00:21:37.569478450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:37.570301 containerd[1469]: time="2025-11-08T00:21:37.570274605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:37.570919 containerd[1469]: time="2025-11-08T00:21:37.570885633Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:21:37.571854 containerd[1469]: time="2025-11-08T00:21:37.571608952Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:37.571854 containerd[1469]: time="2025-11-08T00:21:37.571812285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:21:37.574715 containerd[1469]: time="2025-11-08T00:21:37.574634298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:21:37.576378 containerd[1469]: time="2025-11-08T00:21:37.576295648Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 505.12078ms" Nov 8 00:21:37.577865 containerd[1469]: time="2025-11-08T00:21:37.577443423Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.554946ms" Nov 8 00:21:37.582516 containerd[1469]: time="2025-11-08T00:21:37.582457551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.417409ms" Nov 8 00:21:37.627756 
kubelet[2145]: W1108 00:21:37.627663 2145 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.199.105.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.105.232:6443: connect: connection refused Nov 8 00:21:37.627756 kubelet[2145]: E1108 00:21:37.627722 2145 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.199.105.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.105.232:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:37.757467 containerd[1469]: time="2025-11-08T00:21:37.756899787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:37.757467 containerd[1469]: time="2025-11-08T00:21:37.756966485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:37.757467 containerd[1469]: time="2025-11-08T00:21:37.756987670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:37.757467 containerd[1469]: time="2025-11-08T00:21:37.757076092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:37.763181 containerd[1469]: time="2025-11-08T00:21:37.763059739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:37.763365 containerd[1469]: time="2025-11-08T00:21:37.763201605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:37.763365 containerd[1469]: time="2025-11-08T00:21:37.763236784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:37.763464 containerd[1469]: time="2025-11-08T00:21:37.763415909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:37.772699 containerd[1469]: time="2025-11-08T00:21:37.771189401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:37.772699 containerd[1469]: time="2025-11-08T00:21:37.771255120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:37.772699 containerd[1469]: time="2025-11-08T00:21:37.771266815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:37.772699 containerd[1469]: time="2025-11-08T00:21:37.771351943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:37.793243 systemd[1]: Started cri-containerd-9ccb5c723a2cc37b9d45ac59e26a775ca7b3913da664c674a89f81df587885fa.scope - libcontainer container 9ccb5c723a2cc37b9d45ac59e26a775ca7b3913da664c674a89f81df587885fa. 
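The recurring dns.go:153 warnings above come from the kubelet clamping resolv.conf to the three-nameserver limit that glibc resolvers honor (MAXNS); entries past the first three are dropped, which is why the applied line still contains the duplicate 67.207.67.2. A sketch of that clamping, assuming a hypothetical fourth nameserver (203.0.113.1) in the host's resolv.conf; the parser here is simplified and is not the kubelet's:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // glibc's resolver only consults the first three nameserver entries
    // (MAXNS), which is the limit behind the kubelet's warning.
    const maxNameservers = 3

    // clampNameservers extracts nameserver lines from a resolv.conf body
    // and splits them into the applied prefix and the omitted remainder.
    func clampNameservers(resolvConf string) (applied, omitted []string) {
    	var all []string
    	for _, line := range strings.Split(resolvConf, "\n") {
    		fields := strings.Fields(line)
    		if len(fields) == 2 && fields[0] == "nameserver" {
    			all = append(all, fields[1])
    		}
    	}
    	if len(all) <= maxNameservers {
    		return all, nil
    	}
    	return all[:maxNameservers], all[maxNameservers:]
    }

    func main() {
    	// The first three entries match the applied line in the log; the
    	// fourth is a hypothetical extra that triggers the warning.
    	conf := "nameserver 67.207.67.2\nnameserver 67.207.67.3\n" +
    		"nameserver 67.207.67.2\nnameserver 203.0.113.1\n"
    	applied, omitted := clampNameservers(conf)
    	fmt.Println("applied:", applied, "omitted:", omitted)
    }
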
Nov 8 00:21:37.800902 systemd[1]: Started cri-containerd-22ff1b1cb24617191411d4c80f678444fe22f2fa71bfe9fba2b22644f847705b.scope - libcontainer container 22ff1b1cb24617191411d4c80f678444fe22f2fa71bfe9fba2b22644f847705b. Nov 8 00:21:37.824498 systemd[1]: Started cri-containerd-b23d61c9f1b022fe068fdd44c6e86b54eaaaa6234b1fa7dd4d413beb91e93b2b.scope - libcontainer container b23d61c9f1b022fe068fdd44c6e86b54eaaaa6234b1fa7dd4d413beb91e93b2b. Nov 8 00:21:37.900305 containerd[1469]: time="2025-11-08T00:21:37.900097792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-6d313a6df2,Uid:5aca24b4863479b6cb4f39f748f2030a,Namespace:kube-system,Attempt:0,} returns sandbox id \"22ff1b1cb24617191411d4c80f678444fe22f2fa71bfe9fba2b22644f847705b\"" Nov 8 00:21:37.902082 kubelet[2145]: E1108 00:21:37.902029 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:37.910220 containerd[1469]: time="2025-11-08T00:21:37.910096963Z" level=info msg="CreateContainer within sandbox \"22ff1b1cb24617191411d4c80f678444fe22f2fa71bfe9fba2b22644f847705b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:21:37.912529 kubelet[2145]: W1108 00:21:37.912451 2145 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.199.105.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.105.232:6443: connect: connection refused Nov 8 00:21:37.912529 kubelet[2145]: E1108 00:21:37.912492 2145 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.199.105.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.105.232:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:37.916291 containerd[1469]: time="2025-11-08T00:21:37.916204136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-6d313a6df2,Uid:74aa6cb969560ecb253e3e131cbe1fd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ccb5c723a2cc37b9d45ac59e26a775ca7b3913da664c674a89f81df587885fa\"" Nov 8 00:21:37.917441 kubelet[2145]: E1108 00:21:37.917419 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:37.919685 containerd[1469]: time="2025-11-08T00:21:37.919434513Z" level=info msg="CreateContainer within sandbox \"9ccb5c723a2cc37b9d45ac59e26a775ca7b3913da664c674a89f81df587885fa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:21:37.931076 containerd[1469]: time="2025-11-08T00:21:37.931019557Z" level=info msg="CreateContainer within sandbox \"22ff1b1cb24617191411d4c80f678444fe22f2fa71bfe9fba2b22644f847705b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"06599ccab3c73b700f63ed85f09a9300ec4e06728cc54f410dda9f91e0c8b1fd\"" Nov 8 00:21:37.932641 containerd[1469]: time="2025-11-08T00:21:37.932481073Z" level=info msg="StartContainer for \"06599ccab3c73b700f63ed85f09a9300ec4e06728cc54f410dda9f91e0c8b1fd\"" Nov 8 00:21:37.934794 containerd[1469]: time="2025-11-08T00:21:37.934754264Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-6d313a6df2,Uid:bfd37ae30e65b5bf58976763a2b7cfe0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b23d61c9f1b022fe068fdd44c6e86b54eaaaa6234b1fa7dd4d413beb91e93b2b\"" Nov 8 00:21:37.935696 kubelet[2145]: E1108 00:21:37.935649 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:37.940265 containerd[1469]: time="2025-11-08T00:21:37.940210395Z" level=info msg="CreateContainer within sandbox \"b23d61c9f1b022fe068fdd44c6e86b54eaaaa6234b1fa7dd4d413beb91e93b2b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:21:37.941542 containerd[1469]: time="2025-11-08T00:21:37.941500369Z" level=info msg="CreateContainer within sandbox \"9ccb5c723a2cc37b9d45ac59e26a775ca7b3913da664c674a89f81df587885fa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9bd66225671bc2d63b75361f0427a9e420f8585b2e7ef44ddc252acae0fc5e5e\"" Nov 8 00:21:37.942142 containerd[1469]: time="2025-11-08T00:21:37.942118496Z" level=info msg="StartContainer for \"9bd66225671bc2d63b75361f0427a9e420f8585b2e7ef44ddc252acae0fc5e5e\"" Nov 8 00:21:37.961182 containerd[1469]: time="2025-11-08T00:21:37.961115661Z" level=info msg="CreateContainer within sandbox \"b23d61c9f1b022fe068fdd44c6e86b54eaaaa6234b1fa7dd4d413beb91e93b2b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ab4b796c021b5753fb876779ee505b2bf8ecb2ebc115a39d6742d964f91326a6\"" Nov 8 00:21:37.963107 containerd[1469]: time="2025-11-08T00:21:37.963068711Z" level=info msg="StartContainer for \"ab4b796c021b5753fb876779ee505b2bf8ecb2ebc115a39d6742d964f91326a6\"" Nov 8 00:21:37.984602 systemd[1]: Started cri-containerd-06599ccab3c73b700f63ed85f09a9300ec4e06728cc54f410dda9f91e0c8b1fd.scope - libcontainer container 06599ccab3c73b700f63ed85f09a9300ec4e06728cc54f410dda9f91e0c8b1fd. Nov 8 00:21:37.994010 systemd[1]: Started cri-containerd-9bd66225671bc2d63b75361f0427a9e420f8585b2e7ef44ddc252acae0fc5e5e.scope - libcontainer container 9bd66225671bc2d63b75361f0427a9e420f8585b2e7ef44ddc252acae0fc5e5e. Nov 8 00:21:38.000552 kubelet[2145]: E1108 00:21:37.999865 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.105.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-6d313a6df2?timeout=10s\": dial tcp 24.199.105.232:6443: connect: connection refused" interval="1.6s" Nov 8 00:21:38.046825 systemd[1]: Started cri-containerd-ab4b796c021b5753fb876779ee505b2bf8ecb2ebc115a39d6742d964f91326a6.scope - libcontainer container ab4b796c021b5753fb876779ee505b2bf8ecb2ebc115a39d6742d964f91326a6. 
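The controller.go:145 lines trace the node-lease controller backing off while the API server stays unreachable: the retry interval doubles from 200ms through 400ms and 800ms to the 1.6s seen just above. A sketch of that ensure-with-doubling-backoff loop; the doubling rule and the cap are inferred from the logged intervals, not taken from the controller's source:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // ensureLease keeps retrying try() with a doubling interval, matching
    // the 200ms -> 400ms -> 800ms -> 1.6s progression in the log. The cap
    // is an assumption for the sketch.
    func ensureLease(try func() error) {
    	interval := 200 * time.Millisecond
    	for {
    		err := try()
    		if err == nil {
    			return
    		}
    		fmt.Printf("Failed to ensure lease exists, will retry: %v interval=%v\n", err, interval)
    		time.Sleep(interval)
    		if interval < 7*time.Second {
    			interval *= 2
    		}
    	}
    }

    func main() {
    	attempts := 0
    	ensureLease(func() error {
    		attempts++
    		if attempts < 5 { // simulate the API server coming up on try 5
    			return errors.New("dial tcp 24.199.105.232:6443: connect: connection refused")
    		}
    		return nil
    	})
    	fmt.Println("lease ensured after", attempts, "attempts")
    }
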
Nov 8 00:21:38.052615 kubelet[2145]: W1108 00:21:38.052468 2145 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.199.105.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-6d313a6df2&limit=500&resourceVersion=0": dial tcp 24.199.105.232:6443: connect: connection refused Nov 8 00:21:38.052948 kubelet[2145]: E1108 00:21:38.052624 2145 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://24.199.105.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-6d313a6df2&limit=500&resourceVersion=0\": dial tcp 24.199.105.232:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:21:38.082972 containerd[1469]: time="2025-11-08T00:21:38.082908487Z" level=info msg="StartContainer for \"06599ccab3c73b700f63ed85f09a9300ec4e06728cc54f410dda9f91e0c8b1fd\" returns successfully" Nov 8 00:21:38.105390 containerd[1469]: time="2025-11-08T00:21:38.104632450Z" level=info msg="StartContainer for \"9bd66225671bc2d63b75361f0427a9e420f8585b2e7ef44ddc252acae0fc5e5e\" returns successfully" Nov 8 00:21:38.158118 containerd[1469]: time="2025-11-08T00:21:38.155742919Z" level=info msg="StartContainer for \"ab4b796c021b5753fb876779ee505b2bf8ecb2ebc115a39d6742d964f91326a6\" returns successfully" Nov 8 00:21:38.197501 kubelet[2145]: I1108 00:21:38.197458 2145 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:38.198232 kubelet[2145]: E1108 00:21:38.198118 2145 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.105.232:6443/api/v1/nodes\": dial tcp 24.199.105.232:6443: connect: connection refused" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:38.650501 kubelet[2145]: E1108 00:21:38.650447 2145 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d313a6df2\" not found" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:38.650856 kubelet[2145]: E1108 00:21:38.650661 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:38.654524 kubelet[2145]: E1108 00:21:38.654476 2145 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d313a6df2\" not found" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:38.654676 kubelet[2145]: E1108 00:21:38.654636 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:38.657845 kubelet[2145]: E1108 00:21:38.657498 2145 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d313a6df2\" not found" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:38.657845 kubelet[2145]: E1108 00:21:38.657713 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:39.662352 kubelet[2145]: E1108 00:21:39.662105 2145 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d313a6df2\" not found" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:39.662352 
kubelet[2145]: E1108 00:21:39.662173 2145 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d313a6df2\" not found" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:39.662352 kubelet[2145]: E1108 00:21:39.662246 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:39.662352 kubelet[2145]: E1108 00:21:39.662295 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:39.800529 kubelet[2145]: I1108 00:21:39.800478 2145 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.457366 kubelet[2145]: E1108 00:21:40.457313 2145 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-6d313a6df2\" not found" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.486696 kubelet[2145]: I1108 00:21:40.485392 2145 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.494328 kubelet[2145]: I1108 00:21:40.494205 2145 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.559576 kubelet[2145]: E1108 00:21:40.557858 2145 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-6d313a6df2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.559576 kubelet[2145]: I1108 00:21:40.557903 2145 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.562420 kubelet[2145]: E1108 00:21:40.562378 2145 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.562420 kubelet[2145]: I1108 00:21:40.562414 2145 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.565257 kubelet[2145]: E1108 00:21:40.564580 2145 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-6d313a6df2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.565788 kubelet[2145]: I1108 00:21:40.565757 2145 apiserver.go:52] "Watching apiserver" Nov 8 00:21:40.598092 kubelet[2145]: I1108 00:21:40.598038 2145 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:21:40.663579 kubelet[2145]: I1108 00:21:40.663536 2145 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.666633 kubelet[2145]: E1108 00:21:40.666552 2145 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-6d313a6df2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:40.666833 kubelet[2145]: E1108 00:21:40.666772 2145 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:42.937752 systemd[1]: Reloading requested from client PID 2419 ('systemctl') (unit session-7.scope)... Nov 8 00:21:42.937778 systemd[1]: Reloading... Nov 8 00:21:43.046098 zram_generator::config[2454]: No configuration found. Nov 8 00:21:43.258911 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:21:43.393146 systemd[1]: Reloading finished in 454 ms. Nov 8 00:21:43.451596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:43.467766 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:21:43.468350 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:43.468457 systemd[1]: kubelet.service: Consumed 1.107s CPU time, 128.9M memory peak, 0B memory swap peak. Nov 8 00:21:43.478570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:21:43.655158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:21:43.671534 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:21:43.757583 kubelet[2509]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:21:43.757583 kubelet[2509]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:21:43.757583 kubelet[2509]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:21:43.762367 kubelet[2509]: I1108 00:21:43.757926 2509 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:21:43.780015 kubelet[2509]: I1108 00:21:43.778912 2509 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:21:43.780015 kubelet[2509]: I1108 00:21:43.778996 2509 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:21:43.780015 kubelet[2509]: I1108 00:21:43.779419 2509 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:21:43.784019 kubelet[2509]: I1108 00:21:43.783934 2509 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:21:43.788267 kubelet[2509]: I1108 00:21:43.788226 2509 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:21:43.794307 kubelet[2509]: E1108 00:21:43.794266 2509 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:21:43.794499 kubelet[2509]: I1108 00:21:43.794484 2509 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Nov 8 00:21:43.799241 kubelet[2509]: I1108 00:21:43.799184 2509 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:21:43.801206 kubelet[2509]: I1108 00:21:43.801073 2509 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:21:43.801703 kubelet[2509]: I1108 00:21:43.801221 2509 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-6d313a6df2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:21:43.801880 kubelet[2509]: I1108 00:21:43.801729 2509 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:21:43.801880 kubelet[2509]: I1108 00:21:43.801752 2509 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:21:43.801880 kubelet[2509]: I1108 00:21:43.801851 2509 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:21:43.802891 kubelet[2509]: I1108 00:21:43.802174 2509 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:21:43.802891 kubelet[2509]: I1108 00:21:43.802213 2509 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:21:43.802891 kubelet[2509]: I1108 00:21:43.802243 2509 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:21:43.802891 kubelet[2509]: I1108 00:21:43.802261 2509 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:21:43.804752 kubelet[2509]: I1108 00:21:43.804720 2509 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:21:43.805359 kubelet[2509]: I1108 00:21:43.805337 2509 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:21:43.806090 kubelet[2509]: I1108 00:21:43.806069 2509 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:21:43.806215 kubelet[2509]: I1108 00:21:43.806206 2509 server.go:1287] "Started kubelet" Nov 8 
00:21:43.808485 kubelet[2509]: I1108 00:21:43.808458 2509 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:21:43.820809 kubelet[2509]: I1108 00:21:43.820750 2509 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:21:43.823075 kubelet[2509]: I1108 00:21:43.823040 2509 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:21:43.825014 kubelet[2509]: I1108 00:21:43.824668 2509 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:21:43.825014 kubelet[2509]: I1108 00:21:43.824946 2509 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:21:43.825531 kubelet[2509]: I1108 00:21:43.825506 2509 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:21:43.828321 kubelet[2509]: I1108 00:21:43.828290 2509 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:21:43.829400 kubelet[2509]: E1108 00:21:43.828786 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-6d313a6df2\" not found" Nov 8 00:21:43.832333 kubelet[2509]: I1108 00:21:43.832294 2509 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:21:43.832647 kubelet[2509]: I1108 00:21:43.832634 2509 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:21:43.838015 kubelet[2509]: I1108 00:21:43.837934 2509 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:21:43.840169 kubelet[2509]: I1108 00:21:43.840129 2509 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:21:43.840388 kubelet[2509]: I1108 00:21:43.840376 2509 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:21:43.840496 kubelet[2509]: I1108 00:21:43.840484 2509 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:21:43.840554 kubelet[2509]: I1108 00:21:43.840547 2509 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:21:43.840709 kubelet[2509]: E1108 00:21:43.840675 2509 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:21:43.845630 kubelet[2509]: I1108 00:21:43.845573 2509 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:21:43.846556 kubelet[2509]: I1108 00:21:43.845833 2509 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:21:43.865045 kubelet[2509]: I1108 00:21:43.862614 2509 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:21:43.873426 kubelet[2509]: E1108 00:21:43.862740 2509 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:21:43.941316 kubelet[2509]: E1108 00:21:43.940842 2509 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:21:43.959270 kubelet[2509]: I1108 00:21:43.959181 2509 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:21:43.961725 kubelet[2509]: I1108 00:21:43.959499 2509 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:21:43.961725 kubelet[2509]: I1108 00:21:43.959550 2509 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:21:43.961725 kubelet[2509]: I1108 00:21:43.959776 2509 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:21:43.961725 kubelet[2509]: I1108 00:21:43.959790 2509 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:21:43.961725 kubelet[2509]: I1108 00:21:43.959812 2509 policy_none.go:49] "None policy: Start" Nov 8 00:21:43.961725 kubelet[2509]: I1108 00:21:43.959828 2509 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:21:43.961725 kubelet[2509]: I1108 00:21:43.959839 2509 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:21:43.961725 kubelet[2509]: I1108 00:21:43.960031 2509 state_mem.go:75] "Updated machine memory state" Nov 8 00:21:43.972255 kubelet[2509]: I1108 00:21:43.970856 2509 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:21:43.972255 kubelet[2509]: I1108 00:21:43.971092 2509 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:21:43.972255 kubelet[2509]: I1108 00:21:43.971109 2509 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:21:43.972255 kubelet[2509]: I1108 00:21:43.971518 2509 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:21:43.982809 kubelet[2509]: E1108 00:21:43.981837 2509 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:21:44.088037 kubelet[2509]: I1108 00:21:44.087997 2509 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.101659 kubelet[2509]: I1108 00:21:44.100325 2509 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.101659 kubelet[2509]: I1108 00:21:44.100469 2509 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.142690 kubelet[2509]: I1108 00:21:44.142234 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.144670 kubelet[2509]: I1108 00:21:44.144626 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.144874 kubelet[2509]: I1108 00:21:44.144852 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.153363 kubelet[2509]: W1108 00:21:44.153329 2509 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:21:44.153912 kubelet[2509]: W1108 00:21:44.153882 2509 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:21:44.157147 kubelet[2509]: W1108 00:21:44.157107 2509 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:21:44.235426 kubelet[2509]: I1108 00:21:44.235016 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bfd37ae30e65b5bf58976763a2b7cfe0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" (UID: \"bfd37ae30e65b5bf58976763a2b7cfe0\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.235426 kubelet[2509]: I1108 00:21:44.235067 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfd37ae30e65b5bf58976763a2b7cfe0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" (UID: \"bfd37ae30e65b5bf58976763a2b7cfe0\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.235426 kubelet[2509]: I1108 00:21:44.235089 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5aca24b4863479b6cb4f39f748f2030a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-6d313a6df2\" (UID: \"5aca24b4863479b6cb4f39f748f2030a\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.235426 kubelet[2509]: I1108 00:21:44.235116 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74aa6cb969560ecb253e3e131cbe1fd6-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-6d313a6df2\" (UID: \"74aa6cb969560ecb253e3e131cbe1fd6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.235426 kubelet[2509]: I1108 00:21:44.235170 2509 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74aa6cb969560ecb253e3e131cbe1fd6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-6d313a6df2\" (UID: \"74aa6cb969560ecb253e3e131cbe1fd6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.235799 kubelet[2509]: I1108 00:21:44.235201 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bfd37ae30e65b5bf58976763a2b7cfe0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" (UID: \"bfd37ae30e65b5bf58976763a2b7cfe0\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.235799 kubelet[2509]: I1108 00:21:44.235223 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74aa6cb969560ecb253e3e131cbe1fd6-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-6d313a6df2\" (UID: \"74aa6cb969560ecb253e3e131cbe1fd6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.235799 kubelet[2509]: I1108 00:21:44.235285 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfd37ae30e65b5bf58976763a2b7cfe0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" (UID: \"bfd37ae30e65b5bf58976763a2b7cfe0\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.235799 kubelet[2509]: I1108 00:21:44.235333 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfd37ae30e65b5bf58976763a2b7cfe0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-6d313a6df2\" (UID: \"bfd37ae30e65b5bf58976763a2b7cfe0\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.457094 kubelet[2509]: E1108 00:21:44.456376 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:44.457094 kubelet[2509]: E1108 00:21:44.456744 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:44.458182 kubelet[2509]: E1108 00:21:44.458016 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:44.826319 kubelet[2509]: I1108 00:21:44.826255 2509 apiserver.go:52] "Watching apiserver" Nov 8 00:21:44.838191 kubelet[2509]: I1108 00:21:44.835920 2509 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:21:44.916032 kubelet[2509]: E1108 00:21:44.913173 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:44.916032 kubelet[2509]: I1108 00:21:44.913353 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.916032 kubelet[2509]: E1108 00:21:44.913779 2509 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:44.929152 kubelet[2509]: W1108 00:21:44.928723 2509 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:21:44.929152 kubelet[2509]: E1108 00:21:44.928817 2509 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-6d313a6df2\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" Nov 8 00:21:44.929152 kubelet[2509]: E1108 00:21:44.929103 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:44.980831 kubelet[2509]: I1108 00:21:44.980629 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d313a6df2" podStartSLOduration=0.980588084 podStartE2EDuration="980.588084ms" podCreationTimestamp="2025-11-08 00:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:44.980353444 +0000 UTC m=+1.298670978" watchObservedRunningTime="2025-11-08 00:21:44.980588084 +0000 UTC m=+1.298905611" Nov 8 00:21:45.023260 kubelet[2509]: I1108 00:21:45.022926 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d313a6df2" podStartSLOduration=1.02289896 podStartE2EDuration="1.02289896s" podCreationTimestamp="2025-11-08 00:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:45.003113553 +0000 UTC m=+1.321431105" watchObservedRunningTime="2025-11-08 00:21:45.02289896 +0000 UTC m=+1.341216596" Nov 8 00:21:45.044363 kubelet[2509]: I1108 00:21:45.044270 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d313a6df2" podStartSLOduration=1.04424179 podStartE2EDuration="1.04424179s" podCreationTimestamp="2025-11-08 00:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:45.024638718 +0000 UTC m=+1.342956289" watchObservedRunningTime="2025-11-08 00:21:45.04424179 +0000 UTC m=+1.362559340" Nov 8 00:21:45.916408 kubelet[2509]: E1108 00:21:45.915597 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:45.918224 kubelet[2509]: E1108 00:21:45.918172 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:46.918393 kubelet[2509]: E1108 00:21:46.918301 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:47.919524 kubelet[2509]: E1108 00:21:47.919481 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:49.395623 systemd-timesyncd[1342]: Contacted time server 108.61.215.221:123 (2.flatcar.pool.ntp.org). Nov 8 00:21:49.395701 systemd-timesyncd[1342]: Initial clock synchronization to Sat 2025-11-08 00:21:49.395316 UTC. Nov 8 00:21:49.395908 systemd-resolved[1323]: Clock change detected. Flushing caches. Nov 8 00:21:49.484449 kubelet[2509]: E1108 00:21:49.483259 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:49.698532 kubelet[2509]: I1108 00:21:49.698378 2509 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:21:49.700100 containerd[1469]: time="2025-11-08T00:21:49.699346798Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:21:49.700918 kubelet[2509]: I1108 00:21:49.699621 2509 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:21:50.447087 kubelet[2509]: E1108 00:21:50.447037 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:50.678620 systemd[1]: Created slice kubepods-besteffort-pod64bebd46_5980_47c1_9a1a_64caca76e1a4.slice - libcontainer container kubepods-besteffort-pod64bebd46_5980_47c1_9a1a_64caca76e1a4.slice. Nov 8 00:21:50.800078 kubelet[2509]: I1108 00:21:50.799902 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/64bebd46-5980-47c1-9a1a-64caca76e1a4-kube-proxy\") pod \"kube-proxy-tpg62\" (UID: \"64bebd46-5980-47c1-9a1a-64caca76e1a4\") " pod="kube-system/kube-proxy-tpg62" Nov 8 00:21:50.801117 kubelet[2509]: I1108 00:21:50.800371 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64bebd46-5980-47c1-9a1a-64caca76e1a4-xtables-lock\") pod \"kube-proxy-tpg62\" (UID: \"64bebd46-5980-47c1-9a1a-64caca76e1a4\") " pod="kube-system/kube-proxy-tpg62" Nov 8 00:21:50.801117 kubelet[2509]: I1108 00:21:50.800435 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64bebd46-5980-47c1-9a1a-64caca76e1a4-lib-modules\") pod \"kube-proxy-tpg62\" (UID: \"64bebd46-5980-47c1-9a1a-64caca76e1a4\") " pod="kube-system/kube-proxy-tpg62" Nov 8 00:21:50.801117 kubelet[2509]: I1108 00:21:50.800481 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6gk\" (UniqueName: \"kubernetes.io/projected/64bebd46-5980-47c1-9a1a-64caca76e1a4-kube-api-access-cx6gk\") pod \"kube-proxy-tpg62\" (UID: \"64bebd46-5980-47c1-9a1a-64caca76e1a4\") " pod="kube-system/kube-proxy-tpg62" Nov 8 00:21:50.810790 systemd[1]: Created slice kubepods-besteffort-pode3f54b82_62f1_4c56_8b87_13aa372b1e09.slice - libcontainer container kubepods-besteffort-pode3f54b82_62f1_4c56_8b87_13aa372b1e09.slice. 
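
The two kubepods-besteffort-pod*.slice units systemd created above are derived mechanically from each pod's UID: the QoS class ("besteffort" here) picks the parent slice, and the dashes in the UID are rewritten to underscores to form a valid systemd unit name. A minimal sketch of the mapping visible in these entries (the helper name is ours, not kubelet's):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameFor reproduces the unit names in the log: pod UID
// 64bebd46-5980-47c1-9a1a-64caca76e1a4 in the besteffort QoS class
// becomes kubepods-besteffort-pod64bebd46_5980_47c1_9a1a_64caca76e1a4.slice.
func sliceNameFor(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceNameFor("besteffort", "64bebd46-5980-47c1-9a1a-64caca76e1a4")) // kube-proxy-tpg62
	fmt.Println(sliceNameFor("besteffort", "e3f54b82-62f1-4c56-8b87-13aa372b1e09")) // tigera-operator
}
```
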
Nov 8 00:21:50.902609 kubelet[2509]: I1108 00:21:50.901713 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n64sp\" (UniqueName: \"kubernetes.io/projected/e3f54b82-62f1-4c56-8b87-13aa372b1e09-kube-api-access-n64sp\") pod \"tigera-operator-7dcd859c48-zc9vw\" (UID: \"e3f54b82-62f1-4c56-8b87-13aa372b1e09\") " pod="tigera-operator/tigera-operator-7dcd859c48-zc9vw" Nov 8 00:21:50.902609 kubelet[2509]: I1108 00:21:50.901907 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e3f54b82-62f1-4c56-8b87-13aa372b1e09-var-lib-calico\") pod \"tigera-operator-7dcd859c48-zc9vw\" (UID: \"e3f54b82-62f1-4c56-8b87-13aa372b1e09\") " pod="tigera-operator/tigera-operator-7dcd859c48-zc9vw" Nov 8 00:21:50.989413 kubelet[2509]: E1108 00:21:50.989168 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:50.991404 containerd[1469]: time="2025-11-08T00:21:50.990581590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tpg62,Uid:64bebd46-5980-47c1-9a1a-64caca76e1a4,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:51.035017 containerd[1469]: time="2025-11-08T00:21:51.034856945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:51.035382 containerd[1469]: time="2025-11-08T00:21:51.034998295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:51.035382 containerd[1469]: time="2025-11-08T00:21:51.035057312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:51.035382 containerd[1469]: time="2025-11-08T00:21:51.035300240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:51.076564 systemd[1]: Started cri-containerd-d18496b567e248fd174e96a9be9813f2444ca3235bb45d036d15507f8e6b5e36.scope - libcontainer container d18496b567e248fd174e96a9be9813f2444ca3235bb45d036d15507f8e6b5e36. 
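
The dns.go:153 "Nameserver limits exceeded" errors repeating throughout this boot come from the droplet's resolver configuration carrying more nameserver entries than the kubelet will forward to pods; the applied line it reports keeps only the first three, with 67.207.67.2 appearing twice, suggesting the host list itself repeats it. A minimal sketch of that truncation, assuming the three-entry cap the messages imply; applyNameserverLimit is a hypothetical helper for illustration, not kubelet's own function:

```go
package main

import "fmt"

// maxNameservers mirrors the three-entry cap implied by the
// "Nameserver limits exceeded" entries; pods inherit at most this many.
const maxNameservers = 3

// applyNameserverLimit keeps the first three nameservers and reports
// whether any were dropped (hypothetical helper).
func applyNameserverLimit(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Four resolvers configured; one is dropped, which is when kubelet
	// logs "some nameservers have been omitted".
	conf := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "67.207.67.3"}
	applied, truncated := applyNameserverLimit(conf)
	fmt.Println(applied, truncated) // [67.207.67.2 67.207.67.3 67.207.67.2] true
}
```
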
Nov 8 00:21:51.120010 containerd[1469]: time="2025-11-08T00:21:51.119754156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zc9vw,Uid:e3f54b82-62f1-4c56-8b87-13aa372b1e09,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:21:51.130462 containerd[1469]: time="2025-11-08T00:21:51.130395353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tpg62,Uid:64bebd46-5980-47c1-9a1a-64caca76e1a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d18496b567e248fd174e96a9be9813f2444ca3235bb45d036d15507f8e6b5e36\"" Nov 8 00:21:51.132308 kubelet[2509]: E1108 00:21:51.131907 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:51.144725 containerd[1469]: time="2025-11-08T00:21:51.144498396Z" level=info msg="CreateContainer within sandbox \"d18496b567e248fd174e96a9be9813f2444ca3235bb45d036d15507f8e6b5e36\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:21:51.164849 containerd[1469]: time="2025-11-08T00:21:51.164738711Z" level=info msg="CreateContainer within sandbox \"d18496b567e248fd174e96a9be9813f2444ca3235bb45d036d15507f8e6b5e36\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6545abf94b8902be47cd60ab155b8275ff7be149ad0754e0df5b06222cbf6862\"" Nov 8 00:21:51.170213 containerd[1469]: time="2025-11-08T00:21:51.167559573Z" level=info msg="StartContainer for \"6545abf94b8902be47cd60ab155b8275ff7be149ad0754e0df5b06222cbf6862\"" Nov 8 00:21:51.172428 containerd[1469]: time="2025-11-08T00:21:51.171234188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:51.172664 containerd[1469]: time="2025-11-08T00:21:51.172627384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:51.172751 containerd[1469]: time="2025-11-08T00:21:51.172730409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:51.173025 containerd[1469]: time="2025-11-08T00:21:51.172915447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:51.203650 systemd[1]: Started cri-containerd-3b5c650c3455fa7f602c8c9f10cc5cb2297868e78a3706e8ae7d0815a3ee1fa3.scope - libcontainer container 3b5c650c3455fa7f602c8c9f10cc5cb2297868e78a3706e8ae7d0815a3ee1fa3. Nov 8 00:21:51.211082 systemd[1]: Started cri-containerd-6545abf94b8902be47cd60ab155b8275ff7be149ad0754e0df5b06222cbf6862.scope - libcontainer container 6545abf94b8902be47cd60ab155b8275ff7be149ad0754e0df5b06222cbf6862. 
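
The containerd[1469] lines are logfmt-style key=value records (time, level, msg, runtime, type), with Go-quoted values wherever the message itself contains quotes. For post-processing a journal like this one, a rough tokenizer for that shape; it handles only the quoting seen here, not the full logfmt grammar:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLogfmt splits a containerd-style line into key/value pairs.
// Sketch only: bare tokens plus Go-quoted values with \" escapes.
func parseLogfmt(line string) map[string]string {
	fields := map[string]string{}
	rest := line
	for {
		rest = strings.TrimLeft(rest, " ")
		eq := strings.IndexByte(rest, '=')
		if eq < 0 {
			break
		}
		key := rest[:eq]
		rest = rest[eq+1:]
		if strings.HasPrefix(rest, `"`) {
			quoted, err := strconv.QuotedPrefix(rest) // needs Go 1.17+
			if err != nil {
				break
			}
			fields[key], _ = strconv.Unquote(quoted)
			rest = rest[len(quoted):]
		} else {
			end := strings.IndexByte(rest, ' ')
			if end < 0 {
				end = len(rest)
			}
			fields[key] = rest[:end]
			rest = rest[end:]
		}
	}
	return fields
}

func main() {
	line := `time="2025-11-08T00:21:51.034856945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1`
	f := parseLogfmt(line)
	fmt.Println(f["level"], f["type"], f["msg"])
}
```
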
Nov 8 00:21:51.276079 containerd[1469]: time="2025-11-08T00:21:51.275907702Z" level=info msg="StartContainer for \"6545abf94b8902be47cd60ab155b8275ff7be149ad0754e0df5b06222cbf6862\" returns successfully" Nov 8 00:21:51.287896 containerd[1469]: time="2025-11-08T00:21:51.287836642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zc9vw,Uid:e3f54b82-62f1-4c56-8b87-13aa372b1e09,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3b5c650c3455fa7f602c8c9f10cc5cb2297868e78a3706e8ae7d0815a3ee1fa3\"" Nov 8 00:21:51.292912 containerd[1469]: time="2025-11-08T00:21:51.292853242Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:21:51.454289 kubelet[2509]: E1108 00:21:51.453776 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:51.454289 kubelet[2509]: E1108 00:21:51.453853 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:51.566583 kubelet[2509]: E1108 00:21:51.565494 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:51.587447 kubelet[2509]: I1108 00:21:51.587373 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tpg62" podStartSLOduration=1.5873426529999999 podStartE2EDuration="1.587342653s" podCreationTimestamp="2025-11-08 00:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:51.478918979 +0000 UTC m=+7.275019241" watchObservedRunningTime="2025-11-08 00:21:51.587342653 +0000 UTC m=+7.383442916" Nov 8 00:21:51.920265 systemd[1]: run-containerd-runc-k8s.io-d18496b567e248fd174e96a9be9813f2444ca3235bb45d036d15507f8e6b5e36-runc.9JOafY.mount: Deactivated successfully. Nov 8 00:21:52.458071 kubelet[2509]: E1108 00:21:52.456575 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:53.064169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3495255520.mount: Deactivated successfully. 
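
The RunPodSandbox, CreateContainer, and StartContainer messages above trace the CRI sequence the kubelet drives against containerd: create the pod sandbox, create the container inside it, then start it. A bare-bones sketch of that sequence against the CRI v1 gRPC API; the socket path, image reference, and minimal configs are illustrative assumptions, not values read from this host:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI endpoint; adjust per host.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-tpg62",
			Namespace: "kube-system",
			Uid:       "64bebd46-5980-47c1-9a1a-64caca76e1a4",
		},
	}
	// 1. RunPodSandbox returns the long sandbox id seen in the log.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}
	// 2. CreateContainer within that sandbox returns a container id.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"}, // illustrative tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	// 3. StartContainer; on success containerd logs
	//    "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```
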
Nov 8 00:21:55.339068 containerd[1469]: time="2025-11-08T00:21:55.337796983Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:55.339068 containerd[1469]: time="2025-11-08T00:21:55.338963900Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:21:55.339931 containerd[1469]: time="2025-11-08T00:21:55.339755821Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:55.343884 containerd[1469]: time="2025-11-08T00:21:55.343809981Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:55.345228 containerd[1469]: time="2025-11-08T00:21:55.345117266Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.051759693s" Nov 8 00:21:55.345228 containerd[1469]: time="2025-11-08T00:21:55.345179341Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:21:55.353949 containerd[1469]: time="2025-11-08T00:21:55.353711409Z" level=info msg="CreateContainer within sandbox \"3b5c650c3455fa7f602c8c9f10cc5cb2297868e78a3706e8ae7d0815a3ee1fa3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:21:55.372383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391237297.mount: Deactivated successfully. Nov 8 00:21:55.380590 containerd[1469]: time="2025-11-08T00:21:55.380512122Z" level=info msg="CreateContainer within sandbox \"3b5c650c3455fa7f602c8c9f10cc5cb2297868e78a3706e8ae7d0815a3ee1fa3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a9cbeb62bdb3e1192f41a5940d502921a73b3b3b60bc194e2748843a95930471\"" Nov 8 00:21:55.382148 containerd[1469]: time="2025-11-08T00:21:55.381665522Z" level=info msg="StartContainer for \"a9cbeb62bdb3e1192f41a5940d502921a73b3b3b60bc194e2748843a95930471\"" Nov 8 00:21:55.429413 systemd[1]: run-containerd-runc-k8s.io-a9cbeb62bdb3e1192f41a5940d502921a73b3b3b60bc194e2748843a95930471-runc.MoxYo5.mount: Deactivated successfully. Nov 8 00:21:55.440580 systemd[1]: Started cri-containerd-a9cbeb62bdb3e1192f41a5940d502921a73b3b3b60bc194e2748843a95930471.scope - libcontainer container a9cbeb62bdb3e1192f41a5940d502921a73b3b3b60bc194e2748843a95930471. 
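
The pull that just completed reports 25061691 bytes read for quay.io/tigera/operator:v1.38.7 over 4.051759693s, about 6.2 MB/s from the registry; the "size 25057686" in the Pulled message is the image's recorded size, and the slightly higher bytes-read figure presumably also counts manifest and config fetches. The arithmetic, for the record:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 25061691 // "bytes read" from the stop-pulling event above
	d, _ := time.ParseDuration("4.051759693s")
	fmt.Printf("%.2f MB/s over %s\n", float64(bytesRead)/d.Seconds()/1e6, d) // ≈ 6.19 MB/s
}
```
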
Nov 8 00:21:55.487631 containerd[1469]: time="2025-11-08T00:21:55.486147506Z" level=info msg="StartContainer for \"a9cbeb62bdb3e1192f41a5940d502921a73b3b3b60bc194e2748843a95930471\" returns successfully" Nov 8 00:21:56.508475 kubelet[2509]: I1108 00:21:56.506420 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-zc9vw" podStartSLOduration=2.449995798 podStartE2EDuration="6.506390999s" podCreationTimestamp="2025-11-08 00:21:50 +0000 UTC" firstStartedPulling="2025-11-08 00:21:51.290698782 +0000 UTC m=+7.086799024" lastFinishedPulling="2025-11-08 00:21:55.347093965 +0000 UTC m=+11.143194225" observedRunningTime="2025-11-08 00:21:56.506222705 +0000 UTC m=+12.302322979" watchObservedRunningTime="2025-11-08 00:21:56.506390999 +0000 UTC m=+12.302491262" Nov 8 00:21:57.349242 kubelet[2509]: E1108 00:21:57.348115 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:57.496243 kubelet[2509]: E1108 00:21:57.496080 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:21:57.955145 update_engine[1446]: I20251108 00:21:57.954263 1446 update_attempter.cc:509] Updating boot flags... Nov 8 00:21:58.049007 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2852) Nov 8 00:21:58.160507 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2850) Nov 8 00:22:03.660609 sudo[1653]: pam_unix(sudo:session): session closed for user root Nov 8 00:22:03.666935 sshd[1650]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:03.676756 systemd[1]: sshd@7-24.199.105.232:22-139.178.68.195:45828.service: Deactivated successfully. Nov 8 00:22:03.683810 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:22:03.685126 systemd[1]: session-7.scope: Consumed 6.107s CPU time, 147.0M memory peak, 0B memory swap peak. Nov 8 00:22:03.687609 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:22:03.693015 systemd-logind[1444]: Removed session 7. Nov 8 00:22:12.746531 systemd[1]: Created slice kubepods-besteffort-pod72b72d69_c9ce_4e96_bb74_4e88deecbdaf.slice - libcontainer container kubepods-besteffort-pod72b72d69_c9ce_4e96_bb74_4e88deecbdaf.slice. 
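
The pod_startup_latency_tracker entries report two figures: podStartE2EDuration, the wall time from pod creation to observed running, and podStartSLOduration, which the numbers show to be the E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For the control-plane pods nothing was pulled, so the two coincide; for tigera-operator above, the subtraction reproduces the logged value exactly when computed from the monotonic (m=+...) offsets:

```go
package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) from the tigera-operator entry above.
	const (
		firstStartedPulling = 7.086799024  // seconds since kubelet start
		lastFinishedPulling = 11.143194225 // seconds since kubelet start
		podStartE2E         = 6.506390999  // podStartE2EDuration in seconds
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window %.9fs, SLO duration %.9fs\n", pull, podStartE2E-pull)
	// pull window 4.056395201s, SLO duration 2.449995798s, matching the
	// podStartSLOduration logged for tigera-operator-7dcd859c48-zc9vw.
}
```
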
Nov 8 00:22:12.772715 kubelet[2509]: I1108 00:22:12.772442 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72b72d69-c9ce-4e96-bb74-4e88deecbdaf-tigera-ca-bundle\") pod \"calico-typha-75866fc76d-rmhxq\" (UID: \"72b72d69-c9ce-4e96-bb74-4e88deecbdaf\") " pod="calico-system/calico-typha-75866fc76d-rmhxq" Nov 8 00:22:12.772715 kubelet[2509]: I1108 00:22:12.772524 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/72b72d69-c9ce-4e96-bb74-4e88deecbdaf-typha-certs\") pod \"calico-typha-75866fc76d-rmhxq\" (UID: \"72b72d69-c9ce-4e96-bb74-4e88deecbdaf\") " pod="calico-system/calico-typha-75866fc76d-rmhxq" Nov 8 00:22:12.772715 kubelet[2509]: I1108 00:22:12.772587 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86rd8\" (UniqueName: \"kubernetes.io/projected/72b72d69-c9ce-4e96-bb74-4e88deecbdaf-kube-api-access-86rd8\") pod \"calico-typha-75866fc76d-rmhxq\" (UID: \"72b72d69-c9ce-4e96-bb74-4e88deecbdaf\") " pod="calico-system/calico-typha-75866fc76d-rmhxq" Nov 8 00:22:13.065658 kubelet[2509]: E1108 00:22:13.065466 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:13.078457 containerd[1469]: time="2025-11-08T00:22:13.078389262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75866fc76d-rmhxq,Uid:72b72d69-c9ce-4e96-bb74-4e88deecbdaf,Namespace:calico-system,Attempt:0,}" Nov 8 00:22:13.129874 systemd[1]: Created slice kubepods-besteffort-podef256717_c0c0_4295_83b5_246fc22cf3d8.slice - libcontainer container kubepods-besteffort-podef256717_c0c0_4295_83b5_246fc22cf3d8.slice. Nov 8 00:22:13.162880 containerd[1469]: time="2025-11-08T00:22:13.162730787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:13.163117 containerd[1469]: time="2025-11-08T00:22:13.162899389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:13.163117 containerd[1469]: time="2025-11-08T00:22:13.162956221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:13.163252 containerd[1469]: time="2025-11-08T00:22:13.163135712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:13.192032 kubelet[2509]: I1108 00:22:13.190965 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef256717-c0c0-4295-83b5-246fc22cf3d8-lib-modules\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.192032 kubelet[2509]: I1108 00:22:13.191037 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef256717-c0c0-4295-83b5-246fc22cf3d8-tigera-ca-bundle\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.192032 kubelet[2509]: I1108 00:22:13.191071 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wdw7\" (UniqueName: \"kubernetes.io/projected/ef256717-c0c0-4295-83b5-246fc22cf3d8-kube-api-access-8wdw7\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.194122 kubelet[2509]: I1108 00:22:13.192709 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ef256717-c0c0-4295-83b5-246fc22cf3d8-cni-bin-dir\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.194122 kubelet[2509]: I1108 00:22:13.193073 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef256717-c0c0-4295-83b5-246fc22cf3d8-xtables-lock\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.194122 kubelet[2509]: I1108 00:22:13.193104 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ef256717-c0c0-4295-83b5-246fc22cf3d8-cni-log-dir\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.194122 kubelet[2509]: I1108 00:22:13.193164 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ef256717-c0c0-4295-83b5-246fc22cf3d8-var-lib-calico\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.194882 kubelet[2509]: I1108 00:22:13.194600 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ef256717-c0c0-4295-83b5-246fc22cf3d8-var-run-calico\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.194882 kubelet[2509]: I1108 00:22:13.194678 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ef256717-c0c0-4295-83b5-246fc22cf3d8-cni-net-dir\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.194882 
kubelet[2509]: I1108 00:22:13.194707 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ef256717-c0c0-4295-83b5-246fc22cf3d8-policysync\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.194882 kubelet[2509]: I1108 00:22:13.194731 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ef256717-c0c0-4295-83b5-246fc22cf3d8-flexvol-driver-host\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.194882 kubelet[2509]: I1108 00:22:13.194758 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ef256717-c0c0-4295-83b5-246fc22cf3d8-node-certs\") pod \"calico-node-8fz6q\" (UID: \"ef256717-c0c0-4295-83b5-246fc22cf3d8\") " pod="calico-system/calico-node-8fz6q" Nov 8 00:22:13.229264 systemd[1]: Started cri-containerd-20f226f0e17ff346e452c19c2d9df37e44ffac793e42a0c8e83ed60c38d632f4.scope - libcontainer container 20f226f0e17ff346e452c19c2d9df37e44ffac793e42a0c8e83ed60c38d632f4. Nov 8 00:22:13.268592 kubelet[2509]: E1108 00:22:13.266874 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:13.308237 kubelet[2509]: E1108 00:22:13.306381 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:22:13.308237 kubelet[2509]: W1108 00:22:13.306450 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:22:13.310677 kubelet[2509]: E1108 00:22:13.309655 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:22:13.311955 kubelet[2509]: E1108 00:22:13.310989 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:22:13.311955 kubelet[2509]: W1108 00:22:13.311020 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:22:13.311955 kubelet[2509]: E1108 00:22:13.311065 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the same driver-call.go:262 / driver-call.go:149 / plugins.go:695 failure triplet repeats some two dozen more times between Nov 8 00:22:13.312 and Nov 8 00:22:13.407 as kubelet re-probes the FlexVolume plugin directory]
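
These driver-call.go / plugins.go failures are kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers. The nodeagent~uds directory is the one calico-node's flexvol driver gets copied into (note the flexvol-driver-host host-path volume above); until that copy happens the uds executable is missing, the init call produces no output, and unmarshalling the empty string fails. A FlexVolume driver answers init with a JSON status on stdout; a minimal stub that would satisfy the probe looks roughly like this (the capability set is illustrative):

```go
// Minimal FlexVolume driver stub: kubelet execs the driver with "init"
// and expects a JSON status on stdout. The empty output logged here is
// what an absent or non-executable driver yields.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	op := ""
	if len(os.Args) > 1 {
		op = os.Args[1]
	}
	enc := json.NewEncoder(os.Stdout)
	if op == "init" {
		// attach:false tells kubelet not to route attach/detach calls
		// through this driver (illustrative capability set).
		enc.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	enc.Encode(driverStatus{Status: "Not supported", Message: op})
}
```
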
[the same failure triplets continue through Nov 8 00:22:13.411, interleaved with the next reconciler entry] Nov 8 00:22:13.408715 kubelet[2509]: I1108 00:22:13.408165 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/daa9f29d-2835-4e9f-8181-7aeaf654817a-registration-dir\") pod \"csi-node-driver-7q5q2\" (UID: \"daa9f29d-2835-4e9f-8181-7aeaf654817a\") " pod="calico-system/csi-node-driver-7q5q2" Nov 8 00:22:13.411176 kubelet[2509]: E1108 00:22:13.411008 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 8 00:22:13.411176 kubelet[2509]: I1108 00:22:13.411072 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/daa9f29d-2835-4e9f-8181-7aeaf654817a-socket-dir\") pod \"csi-node-driver-7q5q2\" (UID: \"daa9f29d-2835-4e9f-8181-7aeaf654817a\") " pod="calico-system/csi-node-driver-7q5q2" Nov 8 00:22:13.412067 kubelet[2509]: E1108 00:22:13.411903 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:22:13.412067 kubelet[2509]: W1108 00:22:13.411927 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:22:13.412067 kubelet[2509]: E1108 00:22:13.411955 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:22:13.412067 kubelet[2509]: I1108 00:22:13.411991 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/daa9f29d-2835-4e9f-8181-7aeaf654817a-varrun\") pod \"csi-node-driver-7q5q2\" (UID: \"daa9f29d-2835-4e9f-8181-7aeaf654817a\") " pod="calico-system/csi-node-driver-7q5q2" Nov 8 00:22:13.413443 kubelet[2509]: E1108 00:22:13.413404 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:22:13.413443 kubelet[2509]: W1108 00:22:13.413427 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:22:13.413443 kubelet[2509]: E1108 00:22:13.413452 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 8 00:22:13.413948 kubelet[2509]: E1108 00:22:13.413751 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:22:13.413948 kubelet[2509]: W1108 00:22:13.413762 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:22:13.414913 kubelet[2509]: E1108 00:22:13.414779 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet messages above repeat with fresh timestamps through 00:22:13.519, interleaved with the entries that follow; repeats elided]
Nov 8 00:22:13.415253 kubelet[2509]: I1108 00:22:13.415221 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4l7x\" (UniqueName: \"kubernetes.io/projected/daa9f29d-2835-4e9f-8181-7aeaf654817a-kube-api-access-t4l7x\") pod \"csi-node-driver-7q5q2\" (UID: \"daa9f29d-2835-4e9f-8181-7aeaf654817a\") " pod="calico-system/csi-node-driver-7q5q2"
Nov 8 00:22:13.418600 kubelet[2509]: I1108 00:22:13.418434 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daa9f29d-2835-4e9f-8181-7aeaf654817a-kubelet-dir\") pod \"csi-node-driver-7q5q2\" (UID: \"daa9f29d-2835-4e9f-8181-7aeaf654817a\") " pod="calico-system/csi-node-driver-7q5q2"
Nov 8 00:22:13.422841 containerd[1469]: time="2025-11-08T00:22:13.422769827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75866fc76d-rmhxq,Uid:72b72d69-c9ce-4e96-bb74-4e88deecbdaf,Namespace:calico-system,Attempt:0,} returns sandbox id \"20f226f0e17ff346e452c19c2d9df37e44ffac793e42a0c8e83ed60c38d632f4\""
Nov 8 00:22:13.437755 kubelet[2509]: E1108 00:22:13.436541 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 8 00:22:13.442158 kubelet[2509]: E1108 00:22:13.441687 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 8 00:22:13.442608 containerd[1469]: time="2025-11-08T00:22:13.442554154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8fz6q,Uid:ef256717-c0c0-4295-83b5-246fc22cf3d8,Namespace:calico-system,Attempt:0,}"
Nov 8 00:22:13.450601 containerd[1469]: time="2025-11-08T00:22:13.450537872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 8 00:22:13.503814 containerd[1469]: time="2025-11-08T00:22:13.503262589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:22:13.503814 containerd[1469]: time="2025-11-08T00:22:13.503367770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:22:13.503814 containerd[1469]: time="2025-11-08T00:22:13.503387932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:22:13.503814 containerd[1469]: time="2025-11-08T00:22:13.503551106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
[the FlexVolume probe-failure repeats continue through 00:22:13.548; elided]
Nov 8 00:22:13.548355 systemd[1]: Started cri-containerd-fc61791503e7ad19bdbb5a5111cc6b1b9397881b4bc8c708f77a2b99804d7680.scope - libcontainer container fc61791503e7ad19bdbb5a5111cc6b1b9397881b4bc8c708f77a2b99804d7680.
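The burst of probe failures above has a single cause: the kubelet's FlexVolume discovery execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and parses whatever the driver prints on stdout as JSON. The nodeagent~uds driver binary is not on this node yet (the pod2daemon-flexvol image pulled later in this log is what normally installs it), so the exec fails, stdout stays empty, and unmarshalling zero bytes yields exactly "unexpected end of JSON input". A minimal Go sketch of that failure mode, assuming nothing about kubelet internals beyond what the log shows (the driverStatus shape is illustrative, not copied from kubelet source):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus stands in for the JSON document a FlexVolume driver is
    // expected to print; the field names here are illustrative only.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        // Path and argument copied from the log. The binary is missing, so
        // execErr is non-nil and out stays empty.
        out, execErr := exec.Command(
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
            "init",
        ).CombinedOutput()
        fmt.Println("exec error:", execErr)

        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("unmarshal error:", err) // unexpected end of JSON input
        }
    }

The same three-line failure keeps recurring below while the driver is absent; it is noisy but non-fatal for pods that do not mount FlexVolume volumes.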
Nov 8 00:22:13.571255 kubelet[2509]: E1108 00:22:13.571090 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:22:13.571255 kubelet[2509]: W1108 00:22:13.571133 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:22:13.571255 kubelet[2509]: E1108 00:22:13.571170 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:22:13.610864 containerd[1469]: time="2025-11-08T00:22:13.610804492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8fz6q,Uid:ef256717-c0c0-4295-83b5-246fc22cf3d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc61791503e7ad19bdbb5a5111cc6b1b9397881b4bc8c708f77a2b99804d7680\"" Nov 8 00:22:13.612383 kubelet[2509]: E1108 00:22:13.612330 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:13.905260 systemd[1]: run-containerd-runc-k8s.io-20f226f0e17ff346e452c19c2d9df37e44ffac793e42a0c8e83ed60c38d632f4-runc.sIzwR8.mount: Deactivated successfully. Nov 8 00:22:14.941151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189989013.mount: Deactivated successfully. Nov 8 00:22:15.416565 kubelet[2509]: E1108 00:22:15.416262 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:15.990294 containerd[1469]: time="2025-11-08T00:22:15.990215366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:15.992113 containerd[1469]: time="2025-11-08T00:22:15.992003135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:22:15.992954 containerd[1469]: time="2025-11-08T00:22:15.992908954Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:15.995621 containerd[1469]: time="2025-11-08T00:22:15.995174323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:15.996254 containerd[1469]: time="2025-11-08T00:22:15.996184549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.545577896s" Nov 8 00:22:15.996340 containerd[1469]: time="2025-11-08T00:22:15.996261662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:22:15.998485 containerd[1469]: time="2025-11-08T00:22:15.998156111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:22:16.028291 containerd[1469]: time="2025-11-08T00:22:16.028232601Z" level=info msg="CreateContainer within sandbox \"20f226f0e17ff346e452c19c2d9df37e44ffac793e42a0c8e83ed60c38d632f4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:22:16.044926 containerd[1469]: time="2025-11-08T00:22:16.044749743Z" level=info msg="CreateContainer within sandbox \"20f226f0e17ff346e452c19c2d9df37e44ffac793e42a0c8e83ed60c38d632f4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"92d45e65c583e0c5da2d713ed7fed2d4c7d0d46ed2a0010409e08cc4d0d9bb7b\"" Nov 8 00:22:16.045986 containerd[1469]: time="2025-11-08T00:22:16.045461169Z" level=info msg="StartContainer for \"92d45e65c583e0c5da2d713ed7fed2d4c7d0d46ed2a0010409e08cc4d0d9bb7b\"" Nov 8 00:22:16.198618 systemd[1]: Started cri-containerd-92d45e65c583e0c5da2d713ed7fed2d4c7d0d46ed2a0010409e08cc4d0d9bb7b.scope - libcontainer container 92d45e65c583e0c5da2d713ed7fed2d4c7d0d46ed2a0010409e08cc4d0d9bb7b. Nov 8 00:22:16.300101 containerd[1469]: time="2025-11-08T00:22:16.298495735Z" level=info msg="StartContainer for \"92d45e65c583e0c5da2d713ed7fed2d4c7d0d46ed2a0010409e08cc4d0d9bb7b\" returns successfully" Nov 8 00:22:16.590495 kubelet[2509]: E1108 00:22:16.589342 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:16.634473 kubelet[2509]: E1108 00:22:16.634388 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:22:16.634473 kubelet[2509]: W1108 00:22:16.634437 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:22:16.634473 kubelet[2509]: E1108 00:22:16.634472 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:22:16.636437 kubelet[2509]: E1108 00:22:16.636362 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:22:16.636437 kubelet[2509]: W1108 00:22:16.636410 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:22:16.636437 kubelet[2509]: E1108 00:22:16.636447 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[the driver-call.go:262 / driver-call.go:149 / plugins.go:695 FlexVolume probe-failure triplet repeats with fresh timestamps from 00:22:16.638 through 00:22:16.685; repeats elided]
Nov 8 00:22:16.685288 kubelet[2509]: E1108 00:22:16.685021 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 8 00:22:16.685634 kubelet[2509]: E1108 00:22:16.685601 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:22:16.685690 kubelet[2509]: W1108 00:22:16.685633 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:22:16.685690 kubelet[2509]: E1108 00:22:16.685660 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:22:17.362845 containerd[1469]: time="2025-11-08T00:22:17.362761712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:17.365367 containerd[1469]: time="2025-11-08T00:22:17.365276442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:22:17.366776 kubelet[2509]: E1108 00:22:17.366383 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:17.367427 containerd[1469]: time="2025-11-08T00:22:17.367094945Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:17.387601 containerd[1469]: time="2025-11-08T00:22:17.386397315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:17.387601 containerd[1469]: time="2025-11-08T00:22:17.387351744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.389145378s" Nov 8 00:22:17.387601 containerd[1469]: time="2025-11-08T00:22:17.387414246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:22:17.392075 containerd[1469]: time="2025-11-08T00:22:17.392010871Z" level=info msg="CreateContainer within sandbox \"fc61791503e7ad19bdbb5a5111cc6b1b9397881b4bc8c708f77a2b99804d7680\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:22:17.424209 containerd[1469]: time="2025-11-08T00:22:17.424067693Z" level=info msg="CreateContainer within sandbox \"fc61791503e7ad19bdbb5a5111cc6b1b9397881b4bc8c708f77a2b99804d7680\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e7a696ca6e4a9f3fc6949a86de62d0d2389c70b60912a3900369e0b88a677c70\"" Nov 8 00:22:17.425515 containerd[1469]: time="2025-11-08T00:22:17.425283572Z" level=info msg="StartContainer for 
\"e7a696ca6e4a9f3fc6949a86de62d0d2389c70b60912a3900369e0b88a677c70\"" Nov 8 00:22:17.491576 systemd[1]: Started cri-containerd-e7a696ca6e4a9f3fc6949a86de62d0d2389c70b60912a3900369e0b88a677c70.scope - libcontainer container e7a696ca6e4a9f3fc6949a86de62d0d2389c70b60912a3900369e0b88a677c70. Nov 8 00:22:17.543544 containerd[1469]: time="2025-11-08T00:22:17.543464385Z" level=info msg="StartContainer for \"e7a696ca6e4a9f3fc6949a86de62d0d2389c70b60912a3900369e0b88a677c70\" returns successfully" Nov 8 00:22:17.573166 systemd[1]: cri-containerd-e7a696ca6e4a9f3fc6949a86de62d0d2389c70b60912a3900369e0b88a677c70.scope: Deactivated successfully. Nov 8 00:22:17.594826 kubelet[2509]: I1108 00:22:17.594494 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:22:17.596605 kubelet[2509]: E1108 00:22:17.596082 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:17.598094 kubelet[2509]: E1108 00:22:17.597175 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:17.630778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7a696ca6e4a9f3fc6949a86de62d0d2389c70b60912a3900369e0b88a677c70-rootfs.mount: Deactivated successfully. Nov 8 00:22:17.647752 kubelet[2509]: I1108 00:22:17.646763 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75866fc76d-rmhxq" podStartSLOduration=3.091829723 podStartE2EDuration="5.640908648s" podCreationTimestamp="2025-11-08 00:22:12 +0000 UTC" firstStartedPulling="2025-11-08 00:22:13.448528832 +0000 UTC m=+29.244629086" lastFinishedPulling="2025-11-08 00:22:15.997607754 +0000 UTC m=+31.793708011" observedRunningTime="2025-11-08 00:22:16.758483225 +0000 UTC m=+32.554583502" watchObservedRunningTime="2025-11-08 00:22:17.640908648 +0000 UTC m=+33.437008914" Nov 8 00:22:17.707391 containerd[1469]: time="2025-11-08T00:22:17.634555338Z" level=info msg="shim disconnected" id=e7a696ca6e4a9f3fc6949a86de62d0d2389c70b60912a3900369e0b88a677c70 namespace=k8s.io Nov 8 00:22:17.707669 containerd[1469]: time="2025-11-08T00:22:17.707633241Z" level=warning msg="cleaning up after shim disconnected" id=e7a696ca6e4a9f3fc6949a86de62d0d2389c70b60912a3900369e0b88a677c70 namespace=k8s.io Nov 8 00:22:17.707760 containerd[1469]: time="2025-11-08T00:22:17.707742464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:22:18.600479 kubelet[2509]: E1108 00:22:18.600416 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:18.602583 containerd[1469]: time="2025-11-08T00:22:18.602521847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:22:19.363690 kubelet[2509]: E1108 00:22:19.363604 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:21.364044 kubelet[2509]: E1108 00:22:21.363987 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:23.076418 containerd[1469]: time="2025-11-08T00:22:23.075182657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:23.076418 containerd[1469]: time="2025-11-08T00:22:23.076345797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:22:23.077140 containerd[1469]: time="2025-11-08T00:22:23.077023020Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:23.080796 containerd[1469]: time="2025-11-08T00:22:23.080719943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:23.084412 containerd[1469]: time="2025-11-08T00:22:23.084307680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.481708954s" Nov 8 00:22:23.084697 containerd[1469]: time="2025-11-08T00:22:23.084668385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:22:23.094308 containerd[1469]: time="2025-11-08T00:22:23.094192981Z" level=info msg="CreateContainer within sandbox \"fc61791503e7ad19bdbb5a5111cc6b1b9397881b4bc8c708f77a2b99804d7680\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:22:23.127255 containerd[1469]: time="2025-11-08T00:22:23.126645127Z" level=info msg="CreateContainer within sandbox \"fc61791503e7ad19bdbb5a5111cc6b1b9397881b4bc8c708f77a2b99804d7680\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b451cb093402eba1e6ced19ee3d0970e7313d4ec4842ca73d6cd648643905d12\"" Nov 8 00:22:23.128420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805335697.mount: Deactivated successfully. Nov 8 00:22:23.133498 containerd[1469]: time="2025-11-08T00:22:23.131092432Z" level=info msg="StartContainer for \"b451cb093402eba1e6ced19ee3d0970e7313d4ec4842ca73d6cd648643905d12\"" Nov 8 00:22:23.218554 systemd[1]: Started cri-containerd-b451cb093402eba1e6ced19ee3d0970e7313d4ec4842ca73d6cd648643905d12.scope - libcontainer container b451cb093402eba1e6ced19ee3d0970e7313d4ec4842ca73d6cd648643905d12. 
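The recurring dns.go:153 "Nameserver limits exceeded" entries above reflect another hard cap: a pod's resolv.conf can carry at most three nameserver entries, so the kubelet truncates the merged list and logs what it kept. The applied line here even retains a duplicate 67.207.67.2, which suggests the source list is truncated as-is rather than deduplicated first. A rough sketch of that truncation, with the constant and helper named here for illustration only:

    package main

    import (
        "fmt"
        "strings"
    )

    // maxNameservers mirrors the classic three-entry resolver limit;
    // the name is mine, not the kubelet's.
    const maxNameservers = 3

    func applyNameserverLimit(servers []string) ([]string, bool) {
        if len(servers) > maxNameservers {
            return servers[:maxNameservers], true
        }
        return servers, false
    }

    func main() {
        // A hypothetical merged list that would reproduce the logged output.
        servers := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "10.0.0.2"}
        if applied, exceeded := applyNameserverLimit(servers); exceeded {
            fmt.Printf("the applied nameserver line is: %s\n", strings.Join(applied, " "))
        }
    }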
Nov 8 00:22:23.290271 containerd[1469]: time="2025-11-08T00:22:23.289218299Z" level=info msg="StartContainer for \"b451cb093402eba1e6ced19ee3d0970e7313d4ec4842ca73d6cd648643905d12\" returns successfully" Nov 8 00:22:23.364455 kubelet[2509]: E1108 00:22:23.364231 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:23.629831 kubelet[2509]: E1108 00:22:23.628955 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:24.116322 systemd[1]: cri-containerd-b451cb093402eba1e6ced19ee3d0970e7313d4ec4842ca73d6cd648643905d12.scope: Deactivated successfully. Nov 8 00:22:24.170738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b451cb093402eba1e6ced19ee3d0970e7313d4ec4842ca73d6cd648643905d12-rootfs.mount: Deactivated successfully. Nov 8 00:22:24.179301 containerd[1469]: time="2025-11-08T00:22:24.179176761Z" level=info msg="shim disconnected" id=b451cb093402eba1e6ced19ee3d0970e7313d4ec4842ca73d6cd648643905d12 namespace=k8s.io Nov 8 00:22:24.179301 containerd[1469]: time="2025-11-08T00:22:24.179298597Z" level=warning msg="cleaning up after shim disconnected" id=b451cb093402eba1e6ced19ee3d0970e7313d4ec4842ca73d6cd648643905d12 namespace=k8s.io Nov 8 00:22:24.179301 containerd[1469]: time="2025-11-08T00:22:24.179315663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:22:24.231392 kubelet[2509]: I1108 00:22:24.231338 2509 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:22:24.302176 systemd[1]: Created slice kubepods-burstable-podf6d4dc87_9d2e_4afc_ab03_361e2e8d6f52.slice - libcontainer container kubepods-burstable-podf6d4dc87_9d2e_4afc_ab03_361e2e8d6f52.slice. 
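The pod_startup_latency_tracker entry above for calico-typha-75866fc76d-rmhxq packs a small calculation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A sketch reproducing the arithmetic from the logged timestamps (the relationship is inferred from the numbers, not quoted from kubelet source; the ~3 ns drift against the logged 3.091829723s looks like clock rounding):

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(v string) time.Time {
        t, err := time.Parse(layout, v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-11-08 00:22:12 +0000 UTC")
        firstPull := mustParse("2025-11-08 00:22:13.448528832 +0000 UTC")
        lastPull := mustParse("2025-11-08 00:22:15.997607754 +0000 UTC")
        running := mustParse("2025-11-08 00:22:17.640908648 +0000 UTC")

        e2e := running.Sub(created)          // 5.640908648s, as logged
        slo := e2e - lastPull.Sub(firstPull) // 3.091829726s ≈ logged 3.091829723s
        fmt.Println("podStartE2EDuration:", e2e)
        fmt.Println("podStartSLOduration:", slo)
    }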
Nov 8 00:22:24.331443 kubelet[2509]: I1108 00:22:24.330444 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31b491a3-55cf-4c4e-922c-621192b0de8f-calico-apiserver-certs\") pod \"calico-apiserver-58dd75b54-s7bcs\" (UID: \"31b491a3-55cf-4c4e-922c-621192b0de8f\") " pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" Nov 8 00:22:24.331443 kubelet[2509]: I1108 00:22:24.330675 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52-config-volume\") pod \"coredns-668d6bf9bc-rs7hj\" (UID: \"f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52\") " pod="kube-system/coredns-668d6bf9bc-rs7hj" Nov 8 00:22:24.331443 kubelet[2509]: I1108 00:22:24.330700 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zplnq\" (UniqueName: \"kubernetes.io/projected/f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52-kube-api-access-zplnq\") pod \"coredns-668d6bf9bc-rs7hj\" (UID: \"f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52\") " pod="kube-system/coredns-668d6bf9bc-rs7hj" Nov 8 00:22:24.331443 kubelet[2509]: I1108 00:22:24.330722 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc490ed-6d34-41fd-bb44-ba621857b51e-config\") pod \"goldmane-666569f655-7jxzl\" (UID: \"3fc490ed-6d34-41fd-bb44-ba621857b51e\") " pod="calico-system/goldmane-666569f655-7jxzl" Nov 8 00:22:24.331443 kubelet[2509]: I1108 00:22:24.330739 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/560dd5bf-b92f-472c-9028-b374dabf58bb-tigera-ca-bundle\") pod \"calico-kube-controllers-c57565dbd-rlbqk\" (UID: \"560dd5bf-b92f-472c-9028-b374dabf58bb\") " pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" Nov 8 00:22:24.331747 kubelet[2509]: I1108 00:22:24.330763 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8zh\" (UniqueName: \"kubernetes.io/projected/31b491a3-55cf-4c4e-922c-621192b0de8f-kube-api-access-jp8zh\") pod \"calico-apiserver-58dd75b54-s7bcs\" (UID: \"31b491a3-55cf-4c4e-922c-621192b0de8f\") " pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" Nov 8 00:22:24.331747 kubelet[2509]: I1108 00:22:24.330786 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3fc490ed-6d34-41fd-bb44-ba621857b51e-goldmane-key-pair\") pod \"goldmane-666569f655-7jxzl\" (UID: \"3fc490ed-6d34-41fd-bb44-ba621857b51e\") " pod="calico-system/goldmane-666569f655-7jxzl" Nov 8 00:22:24.331747 kubelet[2509]: I1108 00:22:24.330808 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/52c7cb07-da27-4536-ac57-e79e518b03ff-whisker-backend-key-pair\") pod \"whisker-8dcf856dd-sg2q5\" (UID: \"52c7cb07-da27-4536-ac57-e79e518b03ff\") " pod="calico-system/whisker-8dcf856dd-sg2q5" Nov 8 00:22:24.331747 kubelet[2509]: I1108 00:22:24.330826 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ncw\" (UniqueName: 
\"kubernetes.io/projected/42409f77-f298-4938-9e62-f71427e3d95e-kube-api-access-c2ncw\") pod \"calico-apiserver-58dd75b54-57vp2\" (UID: \"42409f77-f298-4938-9e62-f71427e3d95e\") " pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" Nov 8 00:22:24.331747 kubelet[2509]: I1108 00:22:24.330848 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvg6j\" (UniqueName: \"kubernetes.io/projected/89a1f27b-cb85-45f6-a4b2-8e67e3f028ce-kube-api-access-kvg6j\") pod \"coredns-668d6bf9bc-jjkjr\" (UID: \"89a1f27b-cb85-45f6-a4b2-8e67e3f028ce\") " pod="kube-system/coredns-668d6bf9bc-jjkjr" Nov 8 00:22:24.333720 kubelet[2509]: I1108 00:22:24.330869 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48n5m\" (UniqueName: \"kubernetes.io/projected/560dd5bf-b92f-472c-9028-b374dabf58bb-kube-api-access-48n5m\") pod \"calico-kube-controllers-c57565dbd-rlbqk\" (UID: \"560dd5bf-b92f-472c-9028-b374dabf58bb\") " pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" Nov 8 00:22:24.333720 kubelet[2509]: I1108 00:22:24.330888 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxrcw\" (UniqueName: \"kubernetes.io/projected/52c7cb07-da27-4536-ac57-e79e518b03ff-kube-api-access-qxrcw\") pod \"whisker-8dcf856dd-sg2q5\" (UID: \"52c7cb07-da27-4536-ac57-e79e518b03ff\") " pod="calico-system/whisker-8dcf856dd-sg2q5" Nov 8 00:22:24.333720 kubelet[2509]: I1108 00:22:24.330905 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42409f77-f298-4938-9e62-f71427e3d95e-calico-apiserver-certs\") pod \"calico-apiserver-58dd75b54-57vp2\" (UID: \"42409f77-f298-4938-9e62-f71427e3d95e\") " pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" Nov 8 00:22:24.333720 kubelet[2509]: I1108 00:22:24.330923 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89a1f27b-cb85-45f6-a4b2-8e67e3f028ce-config-volume\") pod \"coredns-668d6bf9bc-jjkjr\" (UID: \"89a1f27b-cb85-45f6-a4b2-8e67e3f028ce\") " pod="kube-system/coredns-668d6bf9bc-jjkjr" Nov 8 00:22:24.333720 kubelet[2509]: I1108 00:22:24.330943 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fc490ed-6d34-41fd-bb44-ba621857b51e-goldmane-ca-bundle\") pod \"goldmane-666569f655-7jxzl\" (UID: \"3fc490ed-6d34-41fd-bb44-ba621857b51e\") " pod="calico-system/goldmane-666569f655-7jxzl" Nov 8 00:22:24.332310 systemd[1]: Created slice kubepods-burstable-pod89a1f27b_cb85_45f6_a4b2_8e67e3f028ce.slice - libcontainer container kubepods-burstable-pod89a1f27b_cb85_45f6_a4b2_8e67e3f028ce.slice. 
Nov 8 00:22:24.333954 kubelet[2509]: I1108 00:22:24.330990 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk5pt\" (UniqueName: \"kubernetes.io/projected/3fc490ed-6d34-41fd-bb44-ba621857b51e-kube-api-access-jk5pt\") pod \"goldmane-666569f655-7jxzl\" (UID: \"3fc490ed-6d34-41fd-bb44-ba621857b51e\") " pod="calico-system/goldmane-666569f655-7jxzl" Nov 8 00:22:24.333954 kubelet[2509]: I1108 00:22:24.331012 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52c7cb07-da27-4536-ac57-e79e518b03ff-whisker-ca-bundle\") pod \"whisker-8dcf856dd-sg2q5\" (UID: \"52c7cb07-da27-4536-ac57-e79e518b03ff\") " pod="calico-system/whisker-8dcf856dd-sg2q5" Nov 8 00:22:24.349537 systemd[1]: Created slice kubepods-besteffort-pod52c7cb07_da27_4536_ac57_e79e518b03ff.slice - libcontainer container kubepods-besteffort-pod52c7cb07_da27_4536_ac57_e79e518b03ff.slice. Nov 8 00:22:24.371396 systemd[1]: Created slice kubepods-besteffort-pod42409f77_f298_4938_9e62_f71427e3d95e.slice - libcontainer container kubepods-besteffort-pod42409f77_f298_4938_9e62_f71427e3d95e.slice. Nov 8 00:22:24.384542 systemd[1]: Created slice kubepods-besteffort-pod560dd5bf_b92f_472c_9028_b374dabf58bb.slice - libcontainer container kubepods-besteffort-pod560dd5bf_b92f_472c_9028_b374dabf58bb.slice. Nov 8 00:22:24.398958 systemd[1]: Created slice kubepods-besteffort-pod31b491a3_55cf_4c4e_922c_621192b0de8f.slice - libcontainer container kubepods-besteffort-pod31b491a3_55cf_4c4e_922c_621192b0de8f.slice. Nov 8 00:22:24.411018 systemd[1]: Created slice kubepods-besteffort-pod3fc490ed_6d34_41fd_bb44_ba621857b51e.slice - libcontainer container kubepods-besteffort-pod3fc490ed_6d34_41fd_bb44_ba621857b51e.slice. 
Nov 8 00:22:24.619480 kubelet[2509]: E1108 00:22:24.619424 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:24.624053 containerd[1469]: time="2025-11-08T00:22:24.623439176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs7hj,Uid:f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52,Namespace:kube-system,Attempt:0,}" Nov 8 00:22:24.637769 kubelet[2509]: E1108 00:22:24.635437 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:24.640882 containerd[1469]: time="2025-11-08T00:22:24.640821226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:22:24.641535 kubelet[2509]: E1108 00:22:24.641475 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:24.649825 containerd[1469]: time="2025-11-08T00:22:24.649733023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jjkjr,Uid:89a1f27b-cb85-45f6-a4b2-8e67e3f028ce,Namespace:kube-system,Attempt:0,}" Nov 8 00:22:24.662314 containerd[1469]: time="2025-11-08T00:22:24.662186827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8dcf856dd-sg2q5,Uid:52c7cb07-da27-4536-ac57-e79e518b03ff,Namespace:calico-system,Attempt:0,}" Nov 8 00:22:24.682043 containerd[1469]: time="2025-11-08T00:22:24.681985494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58dd75b54-57vp2,Uid:42409f77-f298-4938-9e62-f71427e3d95e,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:22:24.706503 containerd[1469]: time="2025-11-08T00:22:24.706267822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c57565dbd-rlbqk,Uid:560dd5bf-b92f-472c-9028-b374dabf58bb,Namespace:calico-system,Attempt:0,}" Nov 8 00:22:24.717882 containerd[1469]: time="2025-11-08T00:22:24.717826397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58dd75b54-s7bcs,Uid:31b491a3-55cf-4c4e-922c-621192b0de8f,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:22:24.723769 containerd[1469]: time="2025-11-08T00:22:24.723234128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7jxzl,Uid:3fc490ed-6d34-41fd-bb44-ba621857b51e,Namespace:calico-system,Attempt:0,}" Nov 8 00:22:25.150071 containerd[1469]: time="2025-11-08T00:22:25.149683559Z" level=error msg="Failed to destroy network for sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.151367 containerd[1469]: time="2025-11-08T00:22:25.151312200Z" level=error msg="Failed to destroy network for sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.158935 containerd[1469]: time="2025-11-08T00:22:25.158842270Z" level=error msg="encountered an error cleaning up failed sandbox 
\"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.164238 containerd[1469]: time="2025-11-08T00:22:25.163651191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8dcf856dd-sg2q5,Uid:52c7cb07-da27-4536-ac57-e79e518b03ff,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.164238 containerd[1469]: time="2025-11-08T00:22:25.164106138Z" level=error msg="Failed to destroy network for sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.168298 containerd[1469]: time="2025-11-08T00:22:25.158850276Z" level=error msg="encountered an error cleaning up failed sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.168298 containerd[1469]: time="2025-11-08T00:22:25.166776474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jjkjr,Uid:89a1f27b-cb85-45f6-a4b2-8e67e3f028ce,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.182240 containerd[1469]: time="2025-11-08T00:22:25.182069340Z" level=error msg="encountered an error cleaning up failed sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.182240 containerd[1469]: time="2025-11-08T00:22:25.182191037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs7hj,Uid:f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.188416 kubelet[2509]: E1108 00:22:25.186875 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.188416 kubelet[2509]: E1108 00:22:25.187039 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rs7hj" Nov 8 00:22:25.188416 kubelet[2509]: E1108 00:22:25.187072 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rs7hj" Nov 8 00:22:25.188416 kubelet[2509]: E1108 00:22:25.187158 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.188735 kubelet[2509]: E1108 00:22:25.187325 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8dcf856dd-sg2q5" Nov 8 00:22:25.188735 kubelet[2509]: E1108 00:22:25.187367 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8dcf856dd-sg2q5" Nov 8 00:22:25.188735 kubelet[2509]: E1108 00:22:25.187426 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8dcf856dd-sg2q5_calico-system(52c7cb07-da27-4536-ac57-e79e518b03ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8dcf856dd-sg2q5_calico-system(52c7cb07-da27-4536-ac57-e79e518b03ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8dcf856dd-sg2q5" podUID="52c7cb07-da27-4536-ac57-e79e518b03ff" Nov 8 00:22:25.188922 kubelet[2509]: E1108 00:22:25.187168 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rs7hj_kube-system(f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rs7hj_kube-system(f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rs7hj" podUID="f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52" Nov 8 00:22:25.188922 kubelet[2509]: E1108 00:22:25.187552 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.188922 kubelet[2509]: E1108 00:22:25.187603 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jjkjr" Nov 8 00:22:25.189098 kubelet[2509]: E1108 00:22:25.187633 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jjkjr" Nov 8 00:22:25.189098 kubelet[2509]: E1108 00:22:25.188287 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jjkjr_kube-system(89a1f27b-cb85-45f6-a4b2-8e67e3f028ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jjkjr_kube-system(89a1f27b-cb85-45f6-a4b2-8e67e3f028ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jjkjr" podUID="89a1f27b-cb85-45f6-a4b2-8e67e3f028ce" Nov 8 00:22:25.277078 containerd[1469]: time="2025-11-08T00:22:25.276014159Z" level=error msg="Failed to destroy network for sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.278619 containerd[1469]: time="2025-11-08T00:22:25.278481044Z" level=error msg="encountered an error cleaning up failed sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.280976 containerd[1469]: time="2025-11-08T00:22:25.280273061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58dd75b54-s7bcs,Uid:31b491a3-55cf-4c4e-922c-621192b0de8f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.283361 kubelet[2509]: E1108 00:22:25.282035 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.283361 kubelet[2509]: E1108 00:22:25.282109 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" Nov 8 00:22:25.283361 kubelet[2509]: E1108 00:22:25.282131 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" Nov 8 00:22:25.283696 kubelet[2509]: E1108 00:22:25.282183 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58dd75b54-s7bcs_calico-apiserver(31b491a3-55cf-4c4e-922c-621192b0de8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58dd75b54-s7bcs_calico-apiserver(31b491a3-55cf-4c4e-922c-621192b0de8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" podUID="31b491a3-55cf-4c4e-922c-621192b0de8f" Nov 8 00:22:25.287378 containerd[1469]: time="2025-11-08T00:22:25.286704315Z" level=error msg="Failed to destroy network for sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.287016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5-shm.mount: Deactivated successfully. 
Nov 8 00:22:25.290953 containerd[1469]: time="2025-11-08T00:22:25.290619474Z" level=error msg="encountered an error cleaning up failed sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.290953 containerd[1469]: time="2025-11-08T00:22:25.290722816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c57565dbd-rlbqk,Uid:560dd5bf-b92f-472c-9028-b374dabf58bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.292829 kubelet[2509]: E1108 00:22:25.291407 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.292829 kubelet[2509]: E1108 00:22:25.291490 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" Nov 8 00:22:25.292829 kubelet[2509]: E1108 00:22:25.291520 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" Nov 8 00:22:25.293036 kubelet[2509]: E1108 00:22:25.291593 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c57565dbd-rlbqk_calico-system(560dd5bf-b92f-472c-9028-b374dabf58bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c57565dbd-rlbqk_calico-system(560dd5bf-b92f-472c-9028-b374dabf58bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" podUID="560dd5bf-b92f-472c-9028-b374dabf58bb" Nov 8 00:22:25.299753 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7-shm.mount: Deactivated successfully. 
Nov 8 00:22:25.305565 containerd[1469]: time="2025-11-08T00:22:25.305394248Z" level=error msg="Failed to destroy network for sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.306109 containerd[1469]: time="2025-11-08T00:22:25.306066698Z" level=error msg="encountered an error cleaning up failed sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.306537 containerd[1469]: time="2025-11-08T00:22:25.306352577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58dd75b54-57vp2,Uid:42409f77-f298-4938-9e62-f71427e3d95e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.309464 kubelet[2509]: E1108 00:22:25.309056 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.309464 kubelet[2509]: E1108 00:22:25.309152 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" Nov 8 00:22:25.309464 kubelet[2509]: E1108 00:22:25.309180 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" Nov 8 00:22:25.310519 kubelet[2509]: E1108 00:22:25.309745 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58dd75b54-57vp2_calico-apiserver(42409f77-f298-4938-9e62-f71427e3d95e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58dd75b54-57vp2_calico-apiserver(42409f77-f298-4938-9e62-f71427e3d95e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" podUID="42409f77-f298-4938-9e62-f71427e3d95e" Nov 8 00:22:25.312834 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9-shm.mount: Deactivated successfully. Nov 8 00:22:25.331246 containerd[1469]: time="2025-11-08T00:22:25.330995203Z" level=error msg="Failed to destroy network for sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.332010 containerd[1469]: time="2025-11-08T00:22:25.331868337Z" level=error msg="encountered an error cleaning up failed sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.332010 containerd[1469]: time="2025-11-08T00:22:25.331950162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7jxzl,Uid:3fc490ed-6d34-41fd-bb44-ba621857b51e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.332858 kubelet[2509]: E1108 00:22:25.332439 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.332858 kubelet[2509]: E1108 00:22:25.332512 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7jxzl" Nov 8 00:22:25.332858 kubelet[2509]: E1108 00:22:25.332534 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7jxzl" Nov 8 00:22:25.333615 kubelet[2509]: E1108 00:22:25.332576 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-7jxzl_calico-system(3fc490ed-6d34-41fd-bb44-ba621857b51e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-7jxzl_calico-system(3fc490ed-6d34-41fd-bb44-ba621857b51e)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7jxzl" podUID="3fc490ed-6d34-41fd-bb44-ba621857b51e" Nov 8 00:22:25.336362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21-shm.mount: Deactivated successfully. Nov 8 00:22:25.371556 systemd[1]: Created slice kubepods-besteffort-poddaa9f29d_2835_4e9f_8181_7aeaf654817a.slice - libcontainer container kubepods-besteffort-poddaa9f29d_2835_4e9f_8181_7aeaf654817a.slice. Nov 8 00:22:25.378259 containerd[1469]: time="2025-11-08T00:22:25.375624268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7q5q2,Uid:daa9f29d-2835-4e9f-8181-7aeaf654817a,Namespace:calico-system,Attempt:0,}" Nov 8 00:22:25.472373 containerd[1469]: time="2025-11-08T00:22:25.471945759Z" level=error msg="Failed to destroy network for sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.472772 containerd[1469]: time="2025-11-08T00:22:25.472707118Z" level=error msg="encountered an error cleaning up failed sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.472873 containerd[1469]: time="2025-11-08T00:22:25.472792428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7q5q2,Uid:daa9f29d-2835-4e9f-8181-7aeaf654817a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.473953 kubelet[2509]: E1108 00:22:25.473718 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.474386 kubelet[2509]: E1108 00:22:25.474091 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7q5q2" Nov 8 00:22:25.476470 kubelet[2509]: E1108 00:22:25.474152 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7q5q2" Nov 8 00:22:25.476470 kubelet[2509]: E1108 00:22:25.474767 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7q5q2_calico-system(daa9f29d-2835-4e9f-8181-7aeaf654817a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7q5q2_calico-system(daa9f29d-2835-4e9f-8181-7aeaf654817a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:25.641899 kubelet[2509]: I1108 00:22:25.639845 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:25.652297 kubelet[2509]: I1108 00:22:25.649498 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:25.652567 containerd[1469]: time="2025-11-08T00:22:25.651735783Z" level=info msg="StopPodSandbox for \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\"" Nov 8 00:22:25.652567 containerd[1469]: time="2025-11-08T00:22:25.652104041Z" level=info msg="StopPodSandbox for \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\"" Nov 8 00:22:25.655751 containerd[1469]: time="2025-11-08T00:22:25.655440726Z" level=info msg="Ensure that sandbox 41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21 in task-service has been cleanup successfully" Nov 8 00:22:25.656509 containerd[1469]: time="2025-11-08T00:22:25.656018367Z" level=info msg="Ensure that sandbox 82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9 in task-service has been cleanup successfully" Nov 8 00:22:25.665024 kubelet[2509]: I1108 00:22:25.664808 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:25.669173 containerd[1469]: time="2025-11-08T00:22:25.668823891Z" level=info msg="StopPodSandbox for \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\"" Nov 8 00:22:25.669404 containerd[1469]: time="2025-11-08T00:22:25.669315026Z" level=info msg="Ensure that sandbox 827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5 in task-service has been cleanup successfully" Nov 8 00:22:25.686984 kubelet[2509]: I1108 00:22:25.684369 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:25.694472 containerd[1469]: time="2025-11-08T00:22:25.694394289Z" level=info msg="StopPodSandbox for \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\"" Nov 8 00:22:25.697729 containerd[1469]: time="2025-11-08T00:22:25.697663772Z" level=info msg="Ensure that sandbox 65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6 in task-service has been 
cleanup successfully" Nov 8 00:22:25.704307 kubelet[2509]: I1108 00:22:25.704252 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:25.714325 containerd[1469]: time="2025-11-08T00:22:25.714264950Z" level=info msg="StopPodSandbox for \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\"" Nov 8 00:22:25.714613 containerd[1469]: time="2025-11-08T00:22:25.714577468Z" level=info msg="Ensure that sandbox 2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7 in task-service has been cleanup successfully" Nov 8 00:22:25.717400 kubelet[2509]: I1108 00:22:25.717356 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:25.721314 containerd[1469]: time="2025-11-08T00:22:25.721133004Z" level=info msg="StopPodSandbox for \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\"" Nov 8 00:22:25.723546 containerd[1469]: time="2025-11-08T00:22:25.722725605Z" level=info msg="Ensure that sandbox 02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97 in task-service has been cleanup successfully" Nov 8 00:22:25.729904 kubelet[2509]: I1108 00:22:25.729515 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:25.739816 containerd[1469]: time="2025-11-08T00:22:25.739744490Z" level=info msg="StopPodSandbox for \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\"" Nov 8 00:22:25.740753 containerd[1469]: time="2025-11-08T00:22:25.740602203Z" level=info msg="Ensure that sandbox 9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a in task-service has been cleanup successfully" Nov 8 00:22:25.744936 kubelet[2509]: I1108 00:22:25.744887 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:25.755125 containerd[1469]: time="2025-11-08T00:22:25.750536356Z" level=info msg="StopPodSandbox for \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\"" Nov 8 00:22:25.759604 containerd[1469]: time="2025-11-08T00:22:25.759141719Z" level=info msg="Ensure that sandbox 881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070 in task-service has been cleanup successfully" Nov 8 00:22:25.897554 containerd[1469]: time="2025-11-08T00:22:25.897467667Z" level=error msg="StopPodSandbox for \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\" failed" error="failed to destroy network for sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.897874 kubelet[2509]: E1108 00:22:25.897770 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:25.897874 
kubelet[2509]: E1108 00:22:25.897841 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21"} Nov 8 00:22:25.898403 kubelet[2509]: E1108 00:22:25.897912 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3fc490ed-6d34-41fd-bb44-ba621857b51e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:22:25.898403 kubelet[2509]: E1108 00:22:25.897940 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3fc490ed-6d34-41fd-bb44-ba621857b51e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7jxzl" podUID="3fc490ed-6d34-41fd-bb44-ba621857b51e" Nov 8 00:22:25.946729 containerd[1469]: time="2025-11-08T00:22:25.946379300Z" level=error msg="StopPodSandbox for \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\" failed" error="failed to destroy network for sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.947100 kubelet[2509]: E1108 00:22:25.946708 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:25.947100 kubelet[2509]: E1108 00:22:25.946782 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6"} Nov 8 00:22:25.947100 kubelet[2509]: E1108 00:22:25.946817 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:22:25.947100 kubelet[2509]: E1108 00:22:25.946841 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rs7hj" podUID="f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52" Nov 8 00:22:25.959664 containerd[1469]: time="2025-11-08T00:22:25.959472757Z" level=error msg="StopPodSandbox for \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\" failed" error="failed to destroy network for sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.962163 kubelet[2509]: E1108 00:22:25.961115 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:25.962163 kubelet[2509]: E1108 00:22:25.961293 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5"} Nov 8 00:22:25.962163 kubelet[2509]: E1108 00:22:25.961355 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31b491a3-55cf-4c4e-922c-621192b0de8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:22:25.962163 kubelet[2509]: E1108 00:22:25.961392 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31b491a3-55cf-4c4e-922c-621192b0de8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" podUID="31b491a3-55cf-4c4e-922c-621192b0de8f" Nov 8 00:22:25.964963 containerd[1469]: time="2025-11-08T00:22:25.964240466Z" level=error msg="StopPodSandbox for \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\" failed" error="failed to destroy network for sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.965246 kubelet[2509]: E1108 00:22:25.964711 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:25.965246 kubelet[2509]: E1108 00:22:25.964793 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7"} Nov 8 00:22:25.965246 kubelet[2509]: E1108 00:22:25.964849 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"560dd5bf-b92f-472c-9028-b374dabf58bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:22:25.965246 kubelet[2509]: E1108 00:22:25.964885 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"560dd5bf-b92f-472c-9028-b374dabf58bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" podUID="560dd5bf-b92f-472c-9028-b374dabf58bb" Nov 8 00:22:25.968464 containerd[1469]: time="2025-11-08T00:22:25.968119767Z" level=error msg="StopPodSandbox for \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\" failed" error="failed to destroy network for sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.969272 kubelet[2509]: E1108 00:22:25.969015 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:25.969272 kubelet[2509]: E1108 00:22:25.969103 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9"} Nov 8 00:22:25.969272 kubelet[2509]: E1108 00:22:25.969160 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42409f77-f298-4938-9e62-f71427e3d95e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:22:25.969272 kubelet[2509]: E1108 
00:22:25.969219 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42409f77-f298-4938-9e62-f71427e3d95e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" podUID="42409f77-f298-4938-9e62-f71427e3d95e" Nov 8 00:22:25.970478 containerd[1469]: time="2025-11-08T00:22:25.969679598Z" level=error msg="StopPodSandbox for \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\" failed" error="failed to destroy network for sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.971429 kubelet[2509]: E1108 00:22:25.970770 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:25.971429 kubelet[2509]: E1108 00:22:25.970875 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a"} Nov 8 00:22:25.971429 kubelet[2509]: E1108 00:22:25.970931 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"daa9f29d-2835-4e9f-8181-7aeaf654817a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:22:25.971429 kubelet[2509]: E1108 00:22:25.970970 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"daa9f29d-2835-4e9f-8181-7aeaf654817a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:25.979119 containerd[1469]: time="2025-11-08T00:22:25.978927450Z" level=error msg="StopPodSandbox for \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\" failed" error="failed to destroy network for sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 
00:22:25.980523 kubelet[2509]: E1108 00:22:25.980435 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:25.981233 kubelet[2509]: E1108 00:22:25.980673 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97"} Nov 8 00:22:25.981233 kubelet[2509]: E1108 00:22:25.981036 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89a1f27b-cb85-45f6-a4b2-8e67e3f028ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:22:25.981233 kubelet[2509]: E1108 00:22:25.981166 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89a1f27b-cb85-45f6-a4b2-8e67e3f028ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jjkjr" podUID="89a1f27b-cb85-45f6-a4b2-8e67e3f028ce" Nov 8 00:22:25.988872 containerd[1469]: time="2025-11-08T00:22:25.988592282Z" level=error msg="StopPodSandbox for \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\" failed" error="failed to destroy network for sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:22:25.989163 kubelet[2509]: E1108 00:22:25.988956 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:25.989163 kubelet[2509]: E1108 00:22:25.989041 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070"} Nov 8 00:22:25.989163 kubelet[2509]: E1108 00:22:25.989096 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52c7cb07-da27-4536-ac57-e79e518b03ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:22:25.989163 kubelet[2509]: E1108 00:22:25.989135 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52c7cb07-da27-4536-ac57-e79e518b03ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8dcf856dd-sg2q5" podUID="52c7cb07-da27-4536-ac57-e79e518b03ff" Nov 8 00:22:31.673856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673447364.mount: Deactivated successfully. Nov 8 00:22:31.788994 containerd[1469]: time="2025-11-08T00:22:31.788898129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.137576606s" Nov 8 00:22:31.790278 containerd[1469]: time="2025-11-08T00:22:31.789719612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:22:31.804364 containerd[1469]: time="2025-11-08T00:22:31.789901639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:22:31.804659 containerd[1469]: time="2025-11-08T00:22:31.794235639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:31.859075 containerd[1469]: time="2025-11-08T00:22:31.858887008Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:31.861233 containerd[1469]: time="2025-11-08T00:22:31.861158662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:22:31.902003 containerd[1469]: time="2025-11-08T00:22:31.901886193Z" level=info msg="CreateContainer within sandbox \"fc61791503e7ad19bdbb5a5111cc6b1b9397881b4bc8c708f77a2b99804d7680\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:22:32.026784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3905212160.mount: Deactivated successfully. 
Nov 8 00:22:32.067127 containerd[1469]: time="2025-11-08T00:22:32.067047962Z" level=info msg="CreateContainer within sandbox \"fc61791503e7ad19bdbb5a5111cc6b1b9397881b4bc8c708f77a2b99804d7680\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc76c4d57486f40fb92f4777941862e257fba069ea93a697ea2101ff99a6e255\"" Nov 8 00:22:32.072784 containerd[1469]: time="2025-11-08T00:22:32.072525896Z" level=info msg="StartContainer for \"bc76c4d57486f40fb92f4777941862e257fba069ea93a697ea2101ff99a6e255\"" Nov 8 00:22:32.213494 systemd[1]: Started cri-containerd-bc76c4d57486f40fb92f4777941862e257fba069ea93a697ea2101ff99a6e255.scope - libcontainer container bc76c4d57486f40fb92f4777941862e257fba069ea93a697ea2101ff99a6e255. Nov 8 00:22:32.273243 containerd[1469]: time="2025-11-08T00:22:32.273119938Z" level=info msg="StartContainer for \"bc76c4d57486f40fb92f4777941862e257fba069ea93a697ea2101ff99a6e255\" returns successfully" Nov 8 00:22:32.446429 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:22:32.446668 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 8 00:22:32.666547 containerd[1469]: time="2025-11-08T00:22:32.666497740Z" level=info msg="StopPodSandbox for \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\"" Nov 8 00:22:32.869696 kubelet[2509]: E1108 00:22:32.867013 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:32.954183 kubelet[2509]: I1108 00:22:32.944794 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8fz6q" podStartSLOduration=1.7364042020000001 podStartE2EDuration="19.916620253s" podCreationTimestamp="2025-11-08 00:22:13 +0000 UTC" firstStartedPulling="2025-11-08 00:22:13.615117603 +0000 UTC m=+29.411217861" lastFinishedPulling="2025-11-08 00:22:31.795333657 +0000 UTC m=+47.591433912" observedRunningTime="2025-11-08 00:22:32.910710891 +0000 UTC m=+48.706811153" watchObservedRunningTime="2025-11-08 00:22:32.916620253 +0000 UTC m=+48.712720516" Nov 8 00:22:32.992490 systemd[1]: run-containerd-runc-k8s.io-bc76c4d57486f40fb92f4777941862e257fba069ea93a697ea2101ff99a6e255-runc.ALhf7f.mount: Deactivated successfully. Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:32.846 [INFO][3710] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:32.847 [INFO][3710] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" iface="eth0" netns="/var/run/netns/cni-7b74c6c9-f6a3-835c-d799-1b40e863e755" Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:32.847 [INFO][3710] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" iface="eth0" netns="/var/run/netns/cni-7b74c6c9-f6a3-835c-d799-1b40e863e755" Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:32.849 [INFO][3710] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" iface="eth0" netns="/var/run/netns/cni-7b74c6c9-f6a3-835c-d799-1b40e863e755" Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:32.849 [INFO][3710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:32.849 [INFO][3710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:33.168 [INFO][3722] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" HandleID="k8s-pod-network.881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:33.169 [INFO][3722] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:33.170 [INFO][3722] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:33.186 [WARNING][3722] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" HandleID="k8s-pod-network.881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:33.186 [INFO][3722] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" HandleID="k8s-pod-network.881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:33.190 [INFO][3722] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:33.203908 containerd[1469]: 2025-11-08 00:22:33.197 [INFO][3710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:33.207742 containerd[1469]: time="2025-11-08T00:22:33.204021677Z" level=info msg="TearDown network for sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\" successfully" Nov 8 00:22:33.207742 containerd[1469]: time="2025-11-08T00:22:33.204068418Z" level=info msg="StopPodSandbox for \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\" returns successfully" Nov 8 00:22:33.214753 systemd[1]: run-netns-cni\x2d7b74c6c9\x2df6a3\x2d835c\x2dd799\x2d1b40e863e755.mount: Deactivated successfully. 
Nov 8 00:22:33.337581 kubelet[2509]: I1108 00:22:33.337379 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/52c7cb07-da27-4536-ac57-e79e518b03ff-whisker-backend-key-pair\") pod \"52c7cb07-da27-4536-ac57-e79e518b03ff\" (UID: \"52c7cb07-da27-4536-ac57-e79e518b03ff\") " Nov 8 00:22:33.350569 kubelet[2509]: I1108 00:22:33.350434 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52c7cb07-da27-4536-ac57-e79e518b03ff-whisker-ca-bundle\") pod \"52c7cb07-da27-4536-ac57-e79e518b03ff\" (UID: \"52c7cb07-da27-4536-ac57-e79e518b03ff\") " Nov 8 00:22:33.352145 kubelet[2509]: I1108 00:22:33.351075 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxrcw\" (UniqueName: \"kubernetes.io/projected/52c7cb07-da27-4536-ac57-e79e518b03ff-kube-api-access-qxrcw\") pod \"52c7cb07-da27-4536-ac57-e79e518b03ff\" (UID: \"52c7cb07-da27-4536-ac57-e79e518b03ff\") " Nov 8 00:22:33.359406 kubelet[2509]: I1108 00:22:33.353535 2509 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52c7cb07-da27-4536-ac57-e79e518b03ff-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "52c7cb07-da27-4536-ac57-e79e518b03ff" (UID: "52c7cb07-da27-4536-ac57-e79e518b03ff"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:22:33.369371 systemd[1]: var-lib-kubelet-pods-52c7cb07\x2dda27\x2d4536\x2dac57\x2de79e518b03ff-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:22:33.373529 kubelet[2509]: I1108 00:22:33.373467 2509 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c7cb07-da27-4536-ac57-e79e518b03ff-kube-api-access-qxrcw" (OuterVolumeSpecName: "kube-api-access-qxrcw") pod "52c7cb07-da27-4536-ac57-e79e518b03ff" (UID: "52c7cb07-da27-4536-ac57-e79e518b03ff"). InnerVolumeSpecName "kube-api-access-qxrcw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:22:33.374460 kubelet[2509]: I1108 00:22:33.374396 2509 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c7cb07-da27-4536-ac57-e79e518b03ff-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "52c7cb07-da27-4536-ac57-e79e518b03ff" (UID: "52c7cb07-da27-4536-ac57-e79e518b03ff"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:22:33.458016 kubelet[2509]: I1108 00:22:33.457650 2509 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qxrcw\" (UniqueName: \"kubernetes.io/projected/52c7cb07-da27-4536-ac57-e79e518b03ff-kube-api-access-qxrcw\") on node \"ci-4081.3.6-n-6d313a6df2\" DevicePath \"\"" Nov 8 00:22:33.458016 kubelet[2509]: I1108 00:22:33.457881 2509 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/52c7cb07-da27-4536-ac57-e79e518b03ff-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-6d313a6df2\" DevicePath \"\"" Nov 8 00:22:33.458016 kubelet[2509]: I1108 00:22:33.457901 2509 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52c7cb07-da27-4536-ac57-e79e518b03ff-whisker-ca-bundle\") on node \"ci-4081.3.6-n-6d313a6df2\" DevicePath \"\"" Nov 8 00:22:33.672365 systemd[1]: var-lib-kubelet-pods-52c7cb07\x2dda27\x2d4536\x2dac57\x2de79e518b03ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqxrcw.mount: Deactivated successfully. Nov 8 00:22:33.873525 kubelet[2509]: E1108 00:22:33.873375 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:33.892730 systemd[1]: Removed slice kubepods-besteffort-pod52c7cb07_da27_4536_ac57_e79e518b03ff.slice - libcontainer container kubepods-besteffort-pod52c7cb07_da27_4536_ac57_e79e518b03ff.slice. Nov 8 00:22:34.015490 systemd[1]: Created slice kubepods-besteffort-pod6e6990fb_3126_46c4_96c6_a63ad2a68c21.slice - libcontainer container kubepods-besteffort-pod6e6990fb_3126_46c4_96c6_a63ad2a68c21.slice. 
Nov 8 00:22:34.064237 kubelet[2509]: I1108 00:22:34.064126 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9ck8\" (UniqueName: \"kubernetes.io/projected/6e6990fb-3126-46c4-96c6-a63ad2a68c21-kube-api-access-h9ck8\") pod \"whisker-97778988b-6hzb4\" (UID: \"6e6990fb-3126-46c4-96c6-a63ad2a68c21\") " pod="calico-system/whisker-97778988b-6hzb4" Nov 8 00:22:34.064237 kubelet[2509]: I1108 00:22:34.064219 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e6990fb-3126-46c4-96c6-a63ad2a68c21-whisker-backend-key-pair\") pod \"whisker-97778988b-6hzb4\" (UID: \"6e6990fb-3126-46c4-96c6-a63ad2a68c21\") " pod="calico-system/whisker-97778988b-6hzb4" Nov 8 00:22:34.064503 kubelet[2509]: I1108 00:22:34.064255 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e6990fb-3126-46c4-96c6-a63ad2a68c21-whisker-ca-bundle\") pod \"whisker-97778988b-6hzb4\" (UID: \"6e6990fb-3126-46c4-96c6-a63ad2a68c21\") " pod="calico-system/whisker-97778988b-6hzb4" Nov 8 00:22:34.341475 containerd[1469]: time="2025-11-08T00:22:34.341353239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-97778988b-6hzb4,Uid:6e6990fb-3126-46c4-96c6-a63ad2a68c21,Namespace:calico-system,Attempt:0,}" Nov 8 00:22:34.380762 kubelet[2509]: I1108 00:22:34.380160 2509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52c7cb07-da27-4536-ac57-e79e518b03ff" path="/var/lib/kubelet/pods/52c7cb07-da27-4536-ac57-e79e518b03ff/volumes" Nov 8 00:22:34.640508 systemd-networkd[1350]: cali68114c49d71: Link UP Nov 8 00:22:34.646973 systemd-networkd[1350]: cali68114c49d71: Gained carrier Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.418 [INFO][3788] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.437 [INFO][3788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0 whisker-97778988b- calico-system 6e6990fb-3126-46c4-96c6-a63ad2a68c21 925 0 2025-11-08 00:22:33 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:97778988b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-6d313a6df2 whisker-97778988b-6hzb4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali68114c49d71 [] [] <nil>}} ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Namespace="calico-system" Pod="whisker-97778988b-6hzb4" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.437 [INFO][3788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Namespace="calico-system" Pod="whisker-97778988b-6hzb4" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.509 [INFO][3801] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" 
HandleID="k8s-pod-network.84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.510 [INFO][3801] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" HandleID="k8s-pod-network.84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-6d313a6df2", "pod":"whisker-97778988b-6hzb4", "timestamp":"2025-11-08 00:22:34.509796443 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d313a6df2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.510 [INFO][3801] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.511 [INFO][3801] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.511 [INFO][3801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d313a6df2' Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.529 [INFO][3801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.546 [INFO][3801] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.559 [INFO][3801] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.564 [INFO][3801] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.570 [INFO][3801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.571 [INFO][3801] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.576 [INFO][3801] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.585 [INFO][3801] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.599 [INFO][3801] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.193/26] block=192.168.45.192/26 handle="k8s-pod-network.84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.599 [INFO][3801] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.193/26] handle="k8s-pod-network.84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.599 [INFO][3801] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:34.721686 containerd[1469]: 2025-11-08 00:22:34.599 [INFO][3801] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.193/26] IPv6=[] ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" HandleID="k8s-pod-network.84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0" Nov 8 00:22:34.728524 containerd[1469]: 2025-11-08 00:22:34.607 [INFO][3788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Namespace="calico-system" Pod="whisker-97778988b-6hzb4" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0", GenerateName:"whisker-97778988b-", Namespace:"calico-system", SelfLink:"", UID:"6e6990fb-3126-46c4-96c6-a63ad2a68c21", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"97778988b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"", Pod:"whisker-97778988b-6hzb4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali68114c49d71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:34.728524 containerd[1469]: 2025-11-08 00:22:34.607 [INFO][3788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.193/32] ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Namespace="calico-system" Pod="whisker-97778988b-6hzb4" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0" Nov 8 00:22:34.728524 containerd[1469]: 2025-11-08 00:22:34.607 [INFO][3788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68114c49d71 ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Namespace="calico-system" Pod="whisker-97778988b-6hzb4" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0" Nov 8 00:22:34.728524 containerd[1469]: 2025-11-08 00:22:34.651 [INFO][3788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Namespace="calico-system" Pod="whisker-97778988b-6hzb4" 
WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0" Nov 8 00:22:34.728524 containerd[1469]: 2025-11-08 00:22:34.655 [INFO][3788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Namespace="calico-system" Pod="whisker-97778988b-6hzb4" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0", GenerateName:"whisker-97778988b-", Namespace:"calico-system", SelfLink:"", UID:"6e6990fb-3126-46c4-96c6-a63ad2a68c21", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"97778988b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de", Pod:"whisker-97778988b-6hzb4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali68114c49d71", MAC:"4a:7c:75:5d:c2:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:34.728524 containerd[1469]: 2025-11-08 00:22:34.716 [INFO][3788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de" Namespace="calico-system" Pod="whisker-97778988b-6hzb4" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-whisker--97778988b--6hzb4-eth0" Nov 8 00:22:34.823777 containerd[1469]: time="2025-11-08T00:22:34.823397721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:34.823777 containerd[1469]: time="2025-11-08T00:22:34.823480427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:34.823777 containerd[1469]: time="2025-11-08T00:22:34.823497226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:34.823777 containerd[1469]: time="2025-11-08T00:22:34.823623188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:34.875930 kubelet[2509]: E1108 00:22:34.875889 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:34.881868 systemd[1]: Started cri-containerd-84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de.scope - libcontainer container 84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de. Nov 8 00:22:35.001184 systemd[1]: run-containerd-runc-k8s.io-bc76c4d57486f40fb92f4777941862e257fba069ea93a697ea2101ff99a6e255-runc.cxVvuA.mount: Deactivated successfully. Nov 8 00:22:35.315139 containerd[1469]: time="2025-11-08T00:22:35.314919327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-97778988b-6hzb4,Uid:6e6990fb-3126-46c4-96c6-a63ad2a68c21,Namespace:calico-system,Attempt:0,} returns sandbox id \"84e18f2d15f3e43801ab344a5fe11871aa663a631ca0e7a9d186c7d7c34a86de\"" Nov 8 00:22:35.339371 containerd[1469]: time="2025-11-08T00:22:35.339267552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:22:35.702120 containerd[1469]: time="2025-11-08T00:22:35.702044442Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:35.726582 containerd[1469]: time="2025-11-08T00:22:35.703729056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:22:35.726829 containerd[1469]: time="2025-11-08T00:22:35.704133127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:22:35.732895 kubelet[2509]: E1108 00:22:35.732590 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:35.733757 kubelet[2509]: E1108 00:22:35.733572 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:35.756967 kubelet[2509]: E1108 00:22:35.746870 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cfa83883e8f7431cbc54801fd68dfa44,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9ck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-97778988b-6hzb4_calico-system(6e6990fb-3126-46c4-96c6-a63ad2a68c21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:35.760465 containerd[1469]: time="2025-11-08T00:22:35.760312131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:22:36.110547 containerd[1469]: time="2025-11-08T00:22:36.110422323Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:36.113902 containerd[1469]: time="2025-11-08T00:22:36.112967022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:22:36.113902 containerd[1469]: time="2025-11-08T00:22:36.113050963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:36.114173 kubelet[2509]: E1108 00:22:36.114009 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:36.114173 kubelet[2509]: E1108 00:22:36.114106 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:36.114835 kubelet[2509]: E1108 00:22:36.114348 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9ck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-97778988b-6hzb4_calico-system(6e6990fb-3126-46c4-96c6-a63ad2a68c21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:36.115997 kubelet[2509]: E1108 00:22:36.115932 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-97778988b-6hzb4" podUID="6e6990fb-3126-46c4-96c6-a63ad2a68c21" Nov 8 00:22:36.129392 systemd-networkd[1350]: cali68114c49d71: Gained IPv6LL Nov 8 00:22:36.366555 containerd[1469]: time="2025-11-08T00:22:36.365281213Z" level=info msg="StopPodSandbox for \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\"" Nov 8 00:22:36.526706 containerd[1469]: 
2025-11-08 00:22:36.451 [INFO][3982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.452 [INFO][3982] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" iface="eth0" netns="/var/run/netns/cni-86a0a453-2fd4-ee3b-f4fe-68307bf93d05" Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.453 [INFO][3982] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" iface="eth0" netns="/var/run/netns/cni-86a0a453-2fd4-ee3b-f4fe-68307bf93d05" Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.456 [INFO][3982] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" iface="eth0" netns="/var/run/netns/cni-86a0a453-2fd4-ee3b-f4fe-68307bf93d05" Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.456 [INFO][3982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.456 [INFO][3982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.504 [INFO][3989] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" HandleID="k8s-pod-network.41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.505 [INFO][3989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.505 [INFO][3989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.515 [WARNING][3989] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" HandleID="k8s-pod-network.41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.515 [INFO][3989] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" HandleID="k8s-pod-network.41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.518 [INFO][3989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:36.526706 containerd[1469]: 2025-11-08 00:22:36.522 [INFO][3982] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:36.528517 containerd[1469]: time="2025-11-08T00:22:36.528449787Z" level=info msg="TearDown network for sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\" successfully" Nov 8 00:22:36.528517 containerd[1469]: time="2025-11-08T00:22:36.528510345Z" level=info msg="StopPodSandbox for \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\" returns successfully" Nov 8 00:22:36.534134 systemd[1]: run-netns-cni\x2d86a0a453\x2d2fd4\x2dee3b\x2df4fe\x2d68307bf93d05.mount: Deactivated successfully. Nov 8 00:22:36.536894 containerd[1469]: time="2025-11-08T00:22:36.535682711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7jxzl,Uid:3fc490ed-6d34-41fd-bb44-ba621857b51e,Namespace:calico-system,Attempt:1,}" Nov 8 00:22:36.879561 systemd-networkd[1350]: calif1a81badbf4: Link UP Nov 8 00:22:36.882031 systemd-networkd[1350]: calif1a81badbf4: Gained carrier Nov 8 00:22:36.909532 kubelet[2509]: E1108 00:22:36.909442 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-97778988b-6hzb4" podUID="6e6990fb-3126-46c4-96c6-a63ad2a68c21" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.693 [INFO][3996] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.715 [INFO][3996] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0 goldmane-666569f655- calico-system 3fc490ed-6d34-41fd-bb44-ba621857b51e 943 0 2025-11-08 00:22:09 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-6d313a6df2 goldmane-666569f655-7jxzl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif1a81badbf4 [] [] <nil>}} ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Namespace="calico-system" Pod="goldmane-666569f655-7jxzl" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.715 [INFO][3996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Namespace="calico-system" Pod="goldmane-666569f655-7jxzl" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 
00:22:36.789 [INFO][4015] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" HandleID="k8s-pod-network.2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.791 [INFO][4015] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" HandleID="k8s-pod-network.2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-6d313a6df2", "pod":"goldmane-666569f655-7jxzl", "timestamp":"2025-11-08 00:22:36.789963342 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d313a6df2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.791 [INFO][4015] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.791 [INFO][4015] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.791 [INFO][4015] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d313a6df2' Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.804 [INFO][4015] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.814 [INFO][4015] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.824 [INFO][4015] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.829 [INFO][4015] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.832 [INFO][4015] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.833 [INFO][4015] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.837 [INFO][4015] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238 Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.846 [INFO][4015] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.865 [INFO][4015] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.194/26] block=192.168.45.192/26 
handle="k8s-pod-network.2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.865 [INFO][4015] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.194/26] handle="k8s-pod-network.2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.865 [INFO][4015] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:36.958736 containerd[1469]: 2025-11-08 00:22:36.865 [INFO][4015] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.194/26] IPv6=[] ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" HandleID="k8s-pod-network.2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:36.962587 containerd[1469]: 2025-11-08 00:22:36.871 [INFO][3996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Namespace="calico-system" Pod="goldmane-666569f655-7jxzl" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3fc490ed-6d34-41fd-bb44-ba621857b51e", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"", Pod:"goldmane-666569f655-7jxzl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1a81badbf4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:36.962587 containerd[1469]: 2025-11-08 00:22:36.871 [INFO][3996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.194/32] ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Namespace="calico-system" Pod="goldmane-666569f655-7jxzl" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:36.962587 containerd[1469]: 2025-11-08 00:22:36.871 [INFO][3996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1a81badbf4 ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Namespace="calico-system" Pod="goldmane-666569f655-7jxzl" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:36.962587 containerd[1469]: 2025-11-08 00:22:36.887 [INFO][3996] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Namespace="calico-system" Pod="goldmane-666569f655-7jxzl" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:36.962587 containerd[1469]: 2025-11-08 00:22:36.907 [INFO][3996] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Namespace="calico-system" Pod="goldmane-666569f655-7jxzl" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3fc490ed-6d34-41fd-bb44-ba621857b51e", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238", Pod:"goldmane-666569f655-7jxzl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1a81badbf4", MAC:"fe:78:53:d3:57:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:36.962587 containerd[1469]: 2025-11-08 00:22:36.944 [INFO][3996] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238" Namespace="calico-system" Pod="goldmane-666569f655-7jxzl" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:37.041288 containerd[1469]: time="2025-11-08T00:22:37.040676967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:37.041593 containerd[1469]: time="2025-11-08T00:22:37.041365006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:37.041679 containerd[1469]: time="2025-11-08T00:22:37.041536269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:37.044429 containerd[1469]: time="2025-11-08T00:22:37.042427340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:37.111663 systemd[1]: Started cri-containerd-2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238.scope - libcontainer container 2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238. Nov 8 00:22:37.308400 containerd[1469]: time="2025-11-08T00:22:37.308144034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7jxzl,Uid:3fc490ed-6d34-41fd-bb44-ba621857b51e,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238\"" Nov 8 00:22:37.313252 containerd[1469]: time="2025-11-08T00:22:37.312608624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:22:37.368739 containerd[1469]: time="2025-11-08T00:22:37.368381752Z" level=info msg="StopPodSandbox for \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\"" Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.495 [INFO][4090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.495 [INFO][4090] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" iface="eth0" netns="/var/run/netns/cni-d7eaaeb7-7b7f-14e3-6675-e8ba309b3cc8" Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.495 [INFO][4090] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" iface="eth0" netns="/var/run/netns/cni-d7eaaeb7-7b7f-14e3-6675-e8ba309b3cc8" Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.495 [INFO][4090] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" iface="eth0" netns="/var/run/netns/cni-d7eaaeb7-7b7f-14e3-6675-e8ba309b3cc8" Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.496 [INFO][4090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.496 [INFO][4090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.546 [INFO][4098] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" HandleID="k8s-pod-network.9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.546 [INFO][4098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.546 [INFO][4098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.557 [WARNING][4098] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" HandleID="k8s-pod-network.9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.557 [INFO][4098] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" HandleID="k8s-pod-network.9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.560 [INFO][4098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:37.571345 containerd[1469]: 2025-11-08 00:22:37.564 [INFO][4090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:37.572209 containerd[1469]: time="2025-11-08T00:22:37.571431174Z" level=info msg="TearDown network for sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\" successfully" Nov 8 00:22:37.572209 containerd[1469]: time="2025-11-08T00:22:37.571486159Z" level=info msg="StopPodSandbox for \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\" returns successfully" Nov 8 00:22:37.577436 containerd[1469]: time="2025-11-08T00:22:37.574152429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7q5q2,Uid:daa9f29d-2835-4e9f-8181-7aeaf654817a,Namespace:calico-system,Attempt:1,}" Nov 8 00:22:37.577439 systemd[1]: run-netns-cni\x2dd7eaaeb7\x2d7b7f\x2d14e3\x2d6675\x2de8ba309b3cc8.mount: Deactivated successfully. Nov 8 00:22:37.647776 containerd[1469]: time="2025-11-08T00:22:37.646417513Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:37.647776 containerd[1469]: time="2025-11-08T00:22:37.647240381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:22:37.647776 containerd[1469]: time="2025-11-08T00:22:37.647351281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:37.649354 kubelet[2509]: E1108 00:22:37.647579 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:37.649354 kubelet[2509]: E1108 00:22:37.647643 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:37.649354 kubelet[2509]: E1108 00:22:37.647837 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk5pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7jxzl_calico-system(3fc490ed-6d34-41fd-bb44-ba621857b51e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:37.649354 kubelet[2509]: E1108 00:22:37.649269 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7jxzl" podUID="3fc490ed-6d34-41fd-bb44-ba621857b51e" Nov 8 00:22:37.783498 systemd-networkd[1350]: 
calicbd12fc0c0d: Link UP Nov 8 00:22:37.785617 systemd-networkd[1350]: calicbd12fc0c0d: Gained carrier Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.652 [INFO][4105] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.670 [INFO][4105] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0 csi-node-driver- calico-system daa9f29d-2835-4e9f-8181-7aeaf654817a 960 0 2025-11-08 00:22:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-6d313a6df2 csi-node-driver-7q5q2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicbd12fc0c0d [] [] }} ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Namespace="calico-system" Pod="csi-node-driver-7q5q2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.670 [INFO][4105] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Namespace="calico-system" Pod="csi-node-driver-7q5q2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.716 [INFO][4117] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" HandleID="k8s-pod-network.1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.716 [INFO][4117] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" HandleID="k8s-pod-network.1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f960), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-6d313a6df2", "pod":"csi-node-driver-7q5q2", "timestamp":"2025-11-08 00:22:37.716164088 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d313a6df2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.716 [INFO][4117] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.716 [INFO][4117] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.716 [INFO][4117] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d313a6df2' Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.726 [INFO][4117] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.733 [INFO][4117] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.744 [INFO][4117] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.747 [INFO][4117] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.753 [INFO][4117] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.753 [INFO][4117] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.756 [INFO][4117] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7 Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.763 [INFO][4117] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.771 [INFO][4117] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.195/26] block=192.168.45.192/26 handle="k8s-pod-network.1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.771 [INFO][4117] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.195/26] handle="k8s-pod-network.1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.771 [INFO][4117] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:22:37.814511 containerd[1469]: 2025-11-08 00:22:37.771 [INFO][4117] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.195/26] IPv6=[] ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" HandleID="k8s-pod-network.1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:37.815547 containerd[1469]: 2025-11-08 00:22:37.775 [INFO][4105] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Namespace="calico-system" Pod="csi-node-driver-7q5q2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"daa9f29d-2835-4e9f-8181-7aeaf654817a", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"", Pod:"csi-node-driver-7q5q2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbd12fc0c0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:37.815547 containerd[1469]: 2025-11-08 00:22:37.776 [INFO][4105] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.195/32] ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Namespace="calico-system" Pod="csi-node-driver-7q5q2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:37.815547 containerd[1469]: 2025-11-08 00:22:37.776 [INFO][4105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbd12fc0c0d ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Namespace="calico-system" Pod="csi-node-driver-7q5q2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:37.815547 containerd[1469]: 2025-11-08 00:22:37.787 [INFO][4105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Namespace="calico-system" Pod="csi-node-driver-7q5q2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:37.815547 containerd[1469]: 2025-11-08 00:22:37.788 [INFO][4105] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Namespace="calico-system" Pod="csi-node-driver-7q5q2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"daa9f29d-2835-4e9f-8181-7aeaf654817a", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7", Pod:"csi-node-driver-7q5q2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbd12fc0c0d", MAC:"62:30:4e:eb:cd:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:37.815547 containerd[1469]: 2025-11-08 00:22:37.809 [INFO][4105] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7" Namespace="calico-system" Pod="csi-node-driver-7q5q2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:37.849452 containerd[1469]: time="2025-11-08T00:22:37.848877186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:37.849452 containerd[1469]: time="2025-11-08T00:22:37.848985147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:37.849452 containerd[1469]: time="2025-11-08T00:22:37.849010355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:37.850086 containerd[1469]: time="2025-11-08T00:22:37.849767202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:37.906149 systemd[1]: Started cri-containerd-1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7.scope - libcontainer container 1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7. 
Nov 8 00:22:37.914064 kubelet[2509]: E1108 00:22:37.914005 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7jxzl" podUID="3fc490ed-6d34-41fd-bb44-ba621857b51e" Nov 8 00:22:37.985791 containerd[1469]: time="2025-11-08T00:22:37.985689084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7q5q2,Uid:daa9f29d-2835-4e9f-8181-7aeaf654817a,Namespace:calico-system,Attempt:1,} returns sandbox id \"1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7\"" Nov 8 00:22:37.988907 containerd[1469]: time="2025-11-08T00:22:37.988870845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:22:38.044535 systemd-networkd[1350]: calif1a81badbf4: Gained IPv6LL Nov 8 00:22:38.315641 containerd[1469]: time="2025-11-08T00:22:38.315419850Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:38.316740 containerd[1469]: time="2025-11-08T00:22:38.316593887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:22:38.316740 containerd[1469]: time="2025-11-08T00:22:38.316660645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:22:38.317374 kubelet[2509]: E1108 00:22:38.317291 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:22:38.317525 kubelet[2509]: E1108 00:22:38.317378 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:22:38.317604 kubelet[2509]: E1108 00:22:38.317554 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4l7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7q5q2_calico-system(daa9f29d-2835-4e9f-8181-7aeaf654817a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:38.321332 containerd[1469]: time="2025-11-08T00:22:38.321242498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:22:38.375332 containerd[1469]: time="2025-11-08T00:22:38.375278286Z" level=info msg="StopPodSandbox for \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\"" Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.468 [INFO][4190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.468 [INFO][4190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" iface="eth0" netns="/var/run/netns/cni-a17e845c-4fa3-7e05-88ec-95268d09cc15" Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.469 [INFO][4190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" iface="eth0" netns="/var/run/netns/cni-a17e845c-4fa3-7e05-88ec-95268d09cc15" Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.470 [INFO][4190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" iface="eth0" netns="/var/run/netns/cni-a17e845c-4fa3-7e05-88ec-95268d09cc15" Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.470 [INFO][4190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.470 [INFO][4190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.514 [INFO][4201] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" HandleID="k8s-pod-network.827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.514 [INFO][4201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.514 [INFO][4201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.524 [WARNING][4201] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" HandleID="k8s-pod-network.827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.524 [INFO][4201] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" HandleID="k8s-pod-network.827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.528 [INFO][4201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:38.537752 containerd[1469]: 2025-11-08 00:22:38.531 [INFO][4190] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:38.538907 containerd[1469]: time="2025-11-08T00:22:38.538500377Z" level=info msg="TearDown network for sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\" successfully" Nov 8 00:22:38.538907 containerd[1469]: time="2025-11-08T00:22:38.538562279Z" level=info msg="StopPodSandbox for \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\" returns successfully" Nov 8 00:22:38.539771 containerd[1469]: time="2025-11-08T00:22:38.539730687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58dd75b54-s7bcs,Uid:31b491a3-55cf-4c4e-922c-621192b0de8f,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:22:38.581460 kubelet[2509]: I1108 00:22:38.581153 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:22:38.585236 kubelet[2509]: E1108 00:22:38.584573 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:38.587036 systemd[1]: run-containerd-runc-k8s.io-1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7-runc.Km8s2T.mount: Deactivated successfully. Nov 8 00:22:38.587301 systemd[1]: run-netns-cni\x2da17e845c\x2d4fa3\x2d7e05\x2d88ec\x2d95268d09cc15.mount: Deactivated successfully. Nov 8 00:22:38.687368 containerd[1469]: time="2025-11-08T00:22:38.686721530Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:38.688572 containerd[1469]: time="2025-11-08T00:22:38.688406712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:22:38.688572 containerd[1469]: time="2025-11-08T00:22:38.688383230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:22:38.690359 kubelet[2509]: E1108 00:22:38.689747 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:38.690359 kubelet[2509]: E1108 00:22:38.689829 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:38.690359 kubelet[2509]: E1108 00:22:38.690070 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4l7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7q5q2_calico-system(daa9f29d-2835-4e9f-8181-7aeaf654817a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:38.691708 kubelet[2509]: E1108 00:22:38.691349 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:38.866242 systemd-networkd[1350]: cali2cda4c7db43: Link UP Nov 8 00:22:38.872349 systemd-networkd[1350]: cali2cda4c7db43: Gained carrier Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.656 [INFO][4215] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.699 [INFO][4215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0 
calico-apiserver-58dd75b54- calico-apiserver 31b491a3-55cf-4c4e-922c-621192b0de8f 976 0 2025-11-08 00:22:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58dd75b54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-6d313a6df2 calico-apiserver-58dd75b54-s7bcs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2cda4c7db43 [] [] }} ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-s7bcs" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.699 [INFO][4215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-s7bcs" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.786 [INFO][4232] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" HandleID="k8s-pod-network.b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.787 [INFO][4232] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" HandleID="k8s-pod-network.b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5c80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-6d313a6df2", "pod":"calico-apiserver-58dd75b54-s7bcs", "timestamp":"2025-11-08 00:22:38.786435885 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d313a6df2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.788 [INFO][4232] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.788 [INFO][4232] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.788 [INFO][4232] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d313a6df2' Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.798 [INFO][4232] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.810 [INFO][4232] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.819 [INFO][4232] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.824 [INFO][4232] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.829 [INFO][4232] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.830 [INFO][4232] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.833 [INFO][4232] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4 Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.841 [INFO][4232] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.851 [INFO][4232] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.196/26] block=192.168.45.192/26 handle="k8s-pod-network.b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.852 [INFO][4232] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.196/26] handle="k8s-pod-network.b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.852 [INFO][4232] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:22:38.901403 containerd[1469]: 2025-11-08 00:22:38.852 [INFO][4232] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.196/26] IPv6=[] ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" HandleID="k8s-pod-network.b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:38.905563 containerd[1469]: 2025-11-08 00:22:38.858 [INFO][4215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-s7bcs" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0", GenerateName:"calico-apiserver-58dd75b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"31b491a3-55cf-4c4e-922c-621192b0de8f", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58dd75b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"", Pod:"calico-apiserver-58dd75b54-s7bcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cda4c7db43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:38.905563 containerd[1469]: 2025-11-08 00:22:38.858 [INFO][4215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.196/32] ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-s7bcs" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:38.905563 containerd[1469]: 2025-11-08 00:22:38.858 [INFO][4215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cda4c7db43 ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-s7bcs" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:38.905563 containerd[1469]: 2025-11-08 00:22:38.870 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-s7bcs" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:38.905563 containerd[1469]: 2025-11-08 00:22:38.875 [INFO][4215] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-s7bcs" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0", GenerateName:"calico-apiserver-58dd75b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"31b491a3-55cf-4c4e-922c-621192b0de8f", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58dd75b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4", Pod:"calico-apiserver-58dd75b54-s7bcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cda4c7db43", MAC:"72:d7:66:c3:bc:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:38.905563 containerd[1469]: 2025-11-08 00:22:38.895 [INFO][4215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-s7bcs" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:38.924270 kubelet[2509]: E1108 00:22:38.924094 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:38.930654 kubelet[2509]: E1108 00:22:38.930351 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7jxzl" podUID="3fc490ed-6d34-41fd-bb44-ba621857b51e" Nov 8 00:22:38.935303 kubelet[2509]: E1108 00:22:38.935023 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:38.968889 containerd[1469]: time="2025-11-08T00:22:38.968089630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:38.968889 containerd[1469]: time="2025-11-08T00:22:38.968183889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:38.970223 containerd[1469]: time="2025-11-08T00:22:38.968774189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:38.974500 containerd[1469]: time="2025-11-08T00:22:38.970273159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:39.044010 systemd[1]: Started cri-containerd-b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4.scope - libcontainer container b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4. Nov 8 00:22:39.268972 containerd[1469]: time="2025-11-08T00:22:39.268826400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58dd75b54-s7bcs,Uid:31b491a3-55cf-4c4e-922c-621192b0de8f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4\"" Nov 8 00:22:39.278926 containerd[1469]: time="2025-11-08T00:22:39.277798429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:39.325366 systemd-networkd[1350]: calicbd12fc0c0d: Gained IPv6LL Nov 8 00:22:39.365787 containerd[1469]: time="2025-11-08T00:22:39.365229300Z" level=info msg="StopPodSandbox for \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\"" Nov 8 00:22:39.367933 containerd[1469]: time="2025-11-08T00:22:39.367870005Z" level=info msg="StopPodSandbox for \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\"" Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.503 [INFO][4327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.503 [INFO][4327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" iface="eth0" netns="/var/run/netns/cni-7dc79d59-9d3a-fc30-0603-1f730a79ae95" Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.505 [INFO][4327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" iface="eth0" netns="/var/run/netns/cni-7dc79d59-9d3a-fc30-0603-1f730a79ae95" Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.507 [INFO][4327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" iface="eth0" netns="/var/run/netns/cni-7dc79d59-9d3a-fc30-0603-1f730a79ae95" Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.507 [INFO][4327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.507 [INFO][4327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.594 [INFO][4347] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" HandleID="k8s-pod-network.82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.594 [INFO][4347] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.594 [INFO][4347] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.611 [WARNING][4347] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" HandleID="k8s-pod-network.82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.611 [INFO][4347] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" HandleID="k8s-pod-network.82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.614 [INFO][4347] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:39.631361 containerd[1469]: 2025-11-08 00:22:39.627 [INFO][4327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:39.634933 containerd[1469]: time="2025-11-08T00:22:39.633648839Z" level=info msg="TearDown network for sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\" successfully" Nov 8 00:22:39.634933 containerd[1469]: time="2025-11-08T00:22:39.633714635Z" level=info msg="StopPodSandbox for \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\" returns successfully" Nov 8 00:22:39.639144 systemd[1]: run-netns-cni\x2d7dc79d59\x2d9d3a\x2dfc30\x2d0603\x2d1f730a79ae95.mount: Deactivated successfully. 
Nov 8 00:22:39.644277 containerd[1469]: time="2025-11-08T00:22:39.643747223Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:39.644277 containerd[1469]: time="2025-11-08T00:22:39.643904504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58dd75b54-57vp2,Uid:42409f77-f298-4938-9e62-f71427e3d95e,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:22:39.646145 containerd[1469]: time="2025-11-08T00:22:39.646058409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:39.646888 containerd[1469]: time="2025-11-08T00:22:39.646629870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:39.648246 kubelet[2509]: E1108 00:22:39.647002 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:39.648246 kubelet[2509]: E1108 00:22:39.647111 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:39.659811 kubelet[2509]: E1108 00:22:39.659431 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp8zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58dd75b54-s7bcs_calico-apiserver(31b491a3-55cf-4c4e-922c-621192b0de8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:39.662672 kubelet[2509]: E1108 00:22:39.662581 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" podUID="31b491a3-55cf-4c4e-922c-621192b0de8f" Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.528 [INFO][4335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.529 [INFO][4335] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" iface="eth0" netns="/var/run/netns/cni-2b28176e-bcf3-d93f-f8fc-27f901c10fc3" Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.535 [INFO][4335] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" iface="eth0" netns="/var/run/netns/cni-2b28176e-bcf3-d93f-f8fc-27f901c10fc3" Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.540 [INFO][4335] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" iface="eth0" netns="/var/run/netns/cni-2b28176e-bcf3-d93f-f8fc-27f901c10fc3" Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.541 [INFO][4335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.541 [INFO][4335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.647 [INFO][4352] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" HandleID="k8s-pod-network.2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.647 [INFO][4352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.647 [INFO][4352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.677 [WARNING][4352] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" HandleID="k8s-pod-network.2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.677 [INFO][4352] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" HandleID="k8s-pod-network.2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.682 [INFO][4352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:39.726453 containerd[1469]: 2025-11-08 00:22:39.696 [INFO][4335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:39.730415 containerd[1469]: time="2025-11-08T00:22:39.729312397Z" level=info msg="TearDown network for sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\" successfully" Nov 8 00:22:39.730415 containerd[1469]: time="2025-11-08T00:22:39.729369437Z" level=info msg="StopPodSandbox for \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\" returns successfully" Nov 8 00:22:39.735114 systemd[1]: run-netns-cni\x2d2b28176e\x2dbcf3\x2dd93f\x2df8fc\x2d27f901c10fc3.mount: Deactivated successfully. 
Nov 8 00:22:39.743014 containerd[1469]: time="2025-11-08T00:22:39.742237852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c57565dbd-rlbqk,Uid:560dd5bf-b92f-472c-9028-b374dabf58bb,Namespace:calico-system,Attempt:1,}" Nov 8 00:22:39.871326 kernel: bpftool[4398]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:22:39.947964 kubelet[2509]: E1108 00:22:39.947777 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" podUID="31b491a3-55cf-4c4e-922c-621192b0de8f" Nov 8 00:22:39.955776 kubelet[2509]: E1108 00:22:39.954448 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:40.108709 systemd[1]: Started sshd@8-24.199.105.232:22-139.178.68.195:57230.service - OpenSSH per-connection server daemon (139.178.68.195:57230). 
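Every pull in this log fails the same way: containerd resolves the reference against ghcr.io, the registry answers http.StatusNotFound, and kubelet surfaces that as ErrImagePull, then ImagePullBackOff on retries. The same check can be reproduced outside the kubelet with the OCI distribution API; a sketch assuming ghcr.io's standard anonymous token endpoint:

    import json, urllib.request, urllib.error

    def tag_exists(repo: str, tag: str) -> bool:
        """HEAD the manifest for repo:tag on ghcr.io; a 404 means the tag is absent."""
        tok = json.load(urllib.request.urlopen(
            f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"))["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}", method="HEAD",
            headers={"Authorization": f"Bearer {tok}",
                     "Accept": "application/vnd.oci.image.index.v1+json"})
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as e:
            if e.code == 404:   # the http.StatusNotFound containerd reports above
                return False
            raise

    # tag_exists("flatcar/calico/apiserver", "v3.30.4") -> False, matching the log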
Nov 8 00:22:40.317628 systemd-networkd[1350]: cali80bc9b8febb: Link UP Nov 8 00:22:40.323527 systemd-networkd[1350]: cali80bc9b8febb: Gained carrier Nov 8 00:22:40.374893 sshd[4415]: Accepted publickey for core from 139.178.68.195 port 57230 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:40.382150 containerd[1469]: time="2025-11-08T00:22:40.382090430Z" level=info msg="StopPodSandbox for \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\"" Nov 8 00:22:40.389306 sshd[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:39.865 [INFO][4383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0 calico-kube-controllers-c57565dbd- calico-system 560dd5bf-b92f-472c-9028-b374dabf58bb 1033 0 2025-11-08 00:22:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c57565dbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-6d313a6df2 calico-kube-controllers-c57565dbd-rlbqk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali80bc9b8febb [] [] }} ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Namespace="calico-system" Pod="calico-kube-controllers-c57565dbd-rlbqk" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:39.866 [INFO][4383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Namespace="calico-system" Pod="calico-kube-controllers-c57565dbd-rlbqk" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.054 [INFO][4401] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" HandleID="k8s-pod-network.34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.055 [INFO][4401] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" HandleID="k8s-pod-network.34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b3bc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-6d313a6df2", "pod":"calico-kube-controllers-c57565dbd-rlbqk", "timestamp":"2025-11-08 00:22:40.05486009 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d313a6df2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.055 [INFO][4401] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.057 [INFO][4401] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.057 [INFO][4401] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d313a6df2' Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.107 [INFO][4401] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.163 [INFO][4401] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.208 [INFO][4401] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.214 [INFO][4401] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.224 [INFO][4401] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.225 [INFO][4401] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.231 [INFO][4401] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11 Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.240 [INFO][4401] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.270 [INFO][4401] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.197/26] block=192.168.45.192/26 handle="k8s-pod-network.34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.271 [INFO][4401] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.197/26] handle="k8s-pod-network.34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.271 [INFO][4401] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:22:40.396381 containerd[1469]: 2025-11-08 00:22:40.272 [INFO][4401] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.197/26] IPv6=[] ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" HandleID="k8s-pod-network.34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:40.399767 containerd[1469]: 2025-11-08 00:22:40.289 [INFO][4383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Namespace="calico-system" Pod="calico-kube-controllers-c57565dbd-rlbqk" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0", GenerateName:"calico-kube-controllers-c57565dbd-", Namespace:"calico-system", SelfLink:"", UID:"560dd5bf-b92f-472c-9028-b374dabf58bb", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c57565dbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"", Pod:"calico-kube-controllers-c57565dbd-rlbqk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80bc9b8febb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:40.399767 containerd[1469]: 2025-11-08 00:22:40.289 [INFO][4383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.197/32] ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Namespace="calico-system" Pod="calico-kube-controllers-c57565dbd-rlbqk" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:40.399767 containerd[1469]: 2025-11-08 00:22:40.289 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80bc9b8febb ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Namespace="calico-system" Pod="calico-kube-controllers-c57565dbd-rlbqk" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:40.399767 containerd[1469]: 2025-11-08 00:22:40.321 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Namespace="calico-system" Pod="calico-kube-controllers-c57565dbd-rlbqk" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 
00:22:40.399767 containerd[1469]: 2025-11-08 00:22:40.324 [INFO][4383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Namespace="calico-system" Pod="calico-kube-controllers-c57565dbd-rlbqk" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0", GenerateName:"calico-kube-controllers-c57565dbd-", Namespace:"calico-system", SelfLink:"", UID:"560dd5bf-b92f-472c-9028-b374dabf58bb", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c57565dbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11", Pod:"calico-kube-controllers-c57565dbd-rlbqk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80bc9b8febb", MAC:"a2:0c:9a:9c:4c:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:40.399767 containerd[1469]: 2025-11-08 00:22:40.369 [INFO][4383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11" Namespace="calico-system" Pod="calico-kube-controllers-c57565dbd-rlbqk" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:40.413572 systemd-logind[1444]: New session 8 of user core. Nov 8 00:22:40.416520 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:22:40.559363 systemd-networkd[1350]: cali53e9612fd5a: Link UP Nov 8 00:22:40.568652 systemd-networkd[1350]: cali53e9612fd5a: Gained carrier Nov 8 00:22:40.573450 containerd[1469]: time="2025-11-08T00:22:40.573252674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:40.573450 containerd[1469]: time="2025-11-08T00:22:40.573356952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:40.573450 containerd[1469]: time="2025-11-08T00:22:40.573376165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:40.574268 containerd[1469]: time="2025-11-08T00:22:40.573516673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:39.920 [INFO][4375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0 calico-apiserver-58dd75b54- calico-apiserver 42409f77-f298-4938-9e62-f71427e3d95e 1032 0 2025-11-08 00:22:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58dd75b54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-6d313a6df2 calico-apiserver-58dd75b54-57vp2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali53e9612fd5a [] [] }} ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-57vp2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:39.922 [INFO][4375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-57vp2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.075 [INFO][4407] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" HandleID="k8s-pod-network.5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.076 [INFO][4407] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" HandleID="k8s-pod-network.5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-6d313a6df2", "pod":"calico-apiserver-58dd75b54-57vp2", "timestamp":"2025-11-08 00:22:40.07511085 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d313a6df2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.076 [INFO][4407] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.271 [INFO][4407] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.272 [INFO][4407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d313a6df2' Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.336 [INFO][4407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.363 [INFO][4407] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.404 [INFO][4407] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.420 [INFO][4407] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.437 [INFO][4407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.437 [INFO][4407] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.450 [INFO][4407] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.465 [INFO][4407] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.487 [INFO][4407] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.198/26] block=192.168.45.192/26 handle="k8s-pod-network.5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.487 [INFO][4407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.198/26] handle="k8s-pod-network.5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.487 [INFO][4407] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:22:40.675675 containerd[1469]: 2025-11-08 00:22:40.488 [INFO][4407] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.198/26] IPv6=[] ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" HandleID="k8s-pod-network.5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:40.676756 containerd[1469]: 2025-11-08 00:22:40.508 [INFO][4375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-57vp2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0", GenerateName:"calico-apiserver-58dd75b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"42409f77-f298-4938-9e62-f71427e3d95e", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58dd75b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"", Pod:"calico-apiserver-58dd75b54-57vp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53e9612fd5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:40.676756 containerd[1469]: 2025-11-08 00:22:40.508 [INFO][4375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.198/32] ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-57vp2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:40.676756 containerd[1469]: 2025-11-08 00:22:40.508 [INFO][4375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53e9612fd5a ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-57vp2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:40.676756 containerd[1469]: 2025-11-08 00:22:40.587 [INFO][4375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-57vp2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:40.676756 containerd[1469]: 2025-11-08 00:22:40.595 [INFO][4375] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-57vp2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0", GenerateName:"calico-apiserver-58dd75b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"42409f77-f298-4938-9e62-f71427e3d95e", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58dd75b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d", Pod:"calico-apiserver-58dd75b54-57vp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53e9612fd5a", MAC:"ce:42:fd:fe:09:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:40.676756 containerd[1469]: 2025-11-08 00:22:40.635 [INFO][4375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d" Namespace="calico-apiserver" Pod="calico-apiserver-58dd75b54-57vp2" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:40.697581 systemd[1]: Started cri-containerd-34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11.scope - libcontainer container 34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11. Nov 8 00:22:40.797340 systemd-networkd[1350]: cali2cda4c7db43: Gained IPv6LL Nov 8 00:22:40.897988 containerd[1469]: time="2025-11-08T00:22:40.894732784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:40.897988 containerd[1469]: time="2025-11-08T00:22:40.894879864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:40.897988 containerd[1469]: time="2025-11-08T00:22:40.894906762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:40.897988 containerd[1469]: time="2025-11-08T00:22:40.895101219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:40.968114 kubelet[2509]: E1108 00:22:40.966843 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" podUID="31b491a3-55cf-4c4e-922c-621192b0de8f" Nov 8 00:22:41.048240 systemd[1]: run-containerd-runc-k8s.io-5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d-runc.DCEN41.mount: Deactivated successfully. Nov 8 00:22:41.077520 systemd[1]: Started cri-containerd-5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d.scope - libcontainer container 5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d. Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:40.873 [INFO][4440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:40.878 [INFO][4440] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" iface="eth0" netns="/var/run/netns/cni-6e1bd94f-7e9e-aea0-873c-532f37f239fb" Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:40.879 [INFO][4440] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" iface="eth0" netns="/var/run/netns/cni-6e1bd94f-7e9e-aea0-873c-532f37f239fb" Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:40.881 [INFO][4440] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" iface="eth0" netns="/var/run/netns/cni-6e1bd94f-7e9e-aea0-873c-532f37f239fb" Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:40.882 [INFO][4440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:40.882 [INFO][4440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:41.115 [INFO][4505] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" HandleID="k8s-pod-network.65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:41.116 [INFO][4505] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:41.116 [INFO][4505] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:41.189 [WARNING][4505] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" HandleID="k8s-pod-network.65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:41.190 [INFO][4505] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" HandleID="k8s-pod-network.65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:41.242 [INFO][4505] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:41.260179 containerd[1469]: 2025-11-08 00:22:41.248 [INFO][4440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:41.263476 containerd[1469]: time="2025-11-08T00:22:41.263101819Z" level=info msg="TearDown network for sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\" successfully" Nov 8 00:22:41.263476 containerd[1469]: time="2025-11-08T00:22:41.263177693Z" level=info msg="StopPodSandbox for \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\" returns successfully" Nov 8 00:22:41.275474 kubelet[2509]: E1108 00:22:41.274777 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:41.279247 containerd[1469]: time="2025-11-08T00:22:41.279126354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs7hj,Uid:f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52,Namespace:kube-system,Attempt:1,}" Nov 8 00:22:41.375222 containerd[1469]: time="2025-11-08T00:22:41.374359481Z" level=info msg="StopPodSandbox for \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\"" Nov 8 00:22:41.653455 systemd[1]: run-netns-cni\x2d6e1bd94f\x2d7e9e\x2daea0\x2d873c\x2d532f37f239fb.mount: Deactivated successfully. Nov 8 00:22:41.706257 containerd[1469]: time="2025-11-08T00:22:41.705785739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c57565dbd-rlbqk,Uid:560dd5bf-b92f-472c-9028-b374dabf58bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11\"" Nov 8 00:22:41.713980 containerd[1469]: time="2025-11-08T00:22:41.713476904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:22:41.779920 sshd[4415]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:41.803668 systemd[1]: sshd@8-24.199.105.232:22-139.178.68.195:57230.service: Deactivated successfully. Nov 8 00:22:41.818163 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:22:41.825545 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:22:41.830690 systemd-logind[1444]: Removed session 8. 
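The IPAM transactions above all follow the same shape: confirm the node's affinity for block 192.168.45.192/26, load the block, and claim the next free address in order (.197, then .198, then .199 across these entries). Calico tracks allocations in a per-block structure; a simplified Python sketch of the ordered-assignment step, assuming the lower addresses in the block are already held by other workloads:

    import ipaddress

    def assign_next(block: str, taken: set) -> str:
        """Claim the first unallocated host address in the block, in order."""
        for ip in ipaddress.ip_network(block).hosts():
            if str(ip) not in taken:
                taken.add(str(ip))
                return str(ip)
        raise RuntimeError("block exhausted")

    taken = {f"192.168.45.{i}" for i in range(193, 199)}   # .193-.198 in use
    print(assign_next("192.168.45.192/26", taken))          # -> 192.168.45.199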
Nov 8 00:22:42.050434 systemd-networkd[1350]: cali6654c8ac669: Link UP Nov 8 00:22:42.064409 systemd-networkd[1350]: cali6654c8ac669: Gained carrier Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:41.869 [INFO][4551] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:41.869 [INFO][4551] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" iface="eth0" netns="/var/run/netns/cni-b35dabd3-adbe-20c5-18b9-0dfe87d03e77" Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:41.871 [INFO][4551] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" iface="eth0" netns="/var/run/netns/cni-b35dabd3-adbe-20c5-18b9-0dfe87d03e77" Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:41.872 [INFO][4551] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" iface="eth0" netns="/var/run/netns/cni-b35dabd3-adbe-20c5-18b9-0dfe87d03e77" Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:41.872 [INFO][4551] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:41.872 [INFO][4551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:41.964 [INFO][4577] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" HandleID="k8s-pod-network.02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:41.965 [INFO][4577] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:42.016 [INFO][4577] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:42.051 [WARNING][4577] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" HandleID="k8s-pod-network.02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:42.051 [INFO][4577] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" HandleID="k8s-pod-network.02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:42.058 [INFO][4577] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:42.092092 containerd[1469]: 2025-11-08 00:22:42.086 [INFO][4551] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:42.098299 containerd[1469]: time="2025-11-08T00:22:42.096308184Z" level=info msg="TearDown network for sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\" successfully" Nov 8 00:22:42.098299 containerd[1469]: time="2025-11-08T00:22:42.097494961Z" level=info msg="StopPodSandbox for \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\" returns successfully" Nov 8 00:22:42.098174 systemd[1]: run-netns-cni\x2db35dabd3\x2dadbe\x2d20c5\x2d18b9\x2d0dfe87d03e77.mount: Deactivated successfully. Nov 8 00:22:42.106393 kubelet[2509]: E1108 00:22:42.103711 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:42.109043 containerd[1469]: time="2025-11-08T00:22:42.108033902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jjkjr,Uid:89a1f27b-cb85-45f6-a4b2-8e67e3f028ce,Namespace:kube-system,Attempt:1,}" Nov 8 00:22:42.116636 containerd[1469]: time="2025-11-08T00:22:42.116551484Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.682 [INFO][4531] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0 coredns-668d6bf9bc- kube-system f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52 1057 0 2025-11-08 00:21:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-6d313a6df2 coredns-668d6bf9bc-rs7hj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6654c8ac669 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7hj" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.682 [INFO][4531] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7hj" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.843 [INFO][4566] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" HandleID="k8s-pod-network.cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.846 [INFO][4566] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" HandleID="k8s-pod-network.cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000450df0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-6d313a6df2", "pod":"coredns-668d6bf9bc-rs7hj", "timestamp":"2025-11-08 
00:22:41.843613962 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d313a6df2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.847 [INFO][4566] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.847 [INFO][4566] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.847 [INFO][4566] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d313a6df2' Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.873 [INFO][4566] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.905 [INFO][4566] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.943 [INFO][4566] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.956 [INFO][4566] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.979 [INFO][4566] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.980 [INFO][4566] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.984 [INFO][4566] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5 Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:41.996 [INFO][4566] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:42.016 [INFO][4566] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.199/26] block=192.168.45.192/26 handle="k8s-pod-network.cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:42.016 [INFO][4566] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.199/26] handle="k8s-pod-network.cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:42.016 [INFO][4566] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:22:42.133277 containerd[1469]: 2025-11-08 00:22:42.016 [INFO][4566] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.199/26] IPv6=[] ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" HandleID="k8s-pod-network.cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:42.141076 containerd[1469]: 2025-11-08 00:22:42.023 [INFO][4531] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7hj" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"", Pod:"coredns-668d6bf9bc-rs7hj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6654c8ac669", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:42.141076 containerd[1469]: 2025-11-08 00:22:42.024 [INFO][4531] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.199/32] ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7hj" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:42.141076 containerd[1469]: 2025-11-08 00:22:42.024 [INFO][4531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6654c8ac669 ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7hj" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:42.141076 containerd[1469]: 2025-11-08 00:22:42.075 [INFO][4531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-rs7hj" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:42.141076 containerd[1469]: 2025-11-08 00:22:42.078 [INFO][4531] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7hj" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5", Pod:"coredns-668d6bf9bc-rs7hj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6654c8ac669", MAC:"c6:01:36:75:49:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:42.141076 containerd[1469]: 2025-11-08 00:22:42.129 [INFO][4531] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5" Namespace="kube-system" Pod="coredns-668d6bf9bc-rs7hj" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:42.141076 containerd[1469]: time="2025-11-08T00:22:42.140279220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:22:42.139822 systemd-networkd[1350]: cali80bc9b8febb: Gained IPv6LL Nov 8 00:22:42.152022 kubelet[2509]: E1108 00:22:42.144097 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:42.152022 kubelet[2509]: E1108 00:22:42.144165 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:42.152022 kubelet[2509]: E1108 00:22:42.144362 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48n5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c57565dbd-rlbqk_calico-system(560dd5bf-b92f-472c-9028-b374dabf58bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 
00:22:42.152022 kubelet[2509]: E1108 00:22:42.145571 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" podUID="560dd5bf-b92f-472c-9028-b374dabf58bb" Nov 8 00:22:42.152691 containerd[1469]: time="2025-11-08T00:22:42.147413663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:42.301110 containerd[1469]: time="2025-11-08T00:22:42.300169336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:42.301110 containerd[1469]: time="2025-11-08T00:22:42.300415906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:42.301110 containerd[1469]: time="2025-11-08T00:22:42.300477397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:42.303474 containerd[1469]: time="2025-11-08T00:22:42.301894315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:42.316893 containerd[1469]: time="2025-11-08T00:22:42.316609093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58dd75b54-57vp2,Uid:42409f77-f298-4938-9e62-f71427e3d95e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d\"" Nov 8 00:22:42.327872 containerd[1469]: time="2025-11-08T00:22:42.327012166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:42.409741 systemd[1]: Started cri-containerd-cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5.scope - libcontainer container cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5. Nov 8 00:22:42.588505 systemd-networkd[1350]: cali53e9612fd5a: Gained IPv6LL Nov 8 00:22:42.590178 containerd[1469]: time="2025-11-08T00:22:42.589917869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rs7hj,Uid:f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52,Namespace:kube-system,Attempt:1,} returns sandbox id \"cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5\"" Nov 8 00:22:42.595950 kubelet[2509]: E1108 00:22:42.594812 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:42.611162 containerd[1469]: time="2025-11-08T00:22:42.609889999Z" level=info msg="CreateContainer within sandbox \"cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:22:42.644697 systemd[1]: run-containerd-runc-k8s.io-cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5-runc.OLmddW.mount: Deactivated successfully. Nov 8 00:22:42.685031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2812410143.mount: Deactivated successfully. 
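The pull failure above is terminal for this attempt: the registry has no tag ghcr.io/flatcar/calico/kube-controllers:v3.30.4, so containerd returns gRPC NotFound, kubelet records ErrImagePull, and, as the later ImagePullBackOff entries show, retries are pushed out on a growing delay. A rough sketch of that capped exponential back-off; the 10s base and 5m ceiling follow kubelet's documented defaults, and the function is illustrative rather than kubelet's implementation:

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the previous back-off up to a ceiling, the
    // ErrImagePull -> ImagePullBackOff progression seen in the log.
    func nextDelay(prev, limit time.Duration) time.Duration {
        if prev == 0 {
            return 10 * time.Second // assumed initial delay
        }
        if d := 2 * prev; d < limit {
            return d
        }
        return limit
    }

    func main() {
        var d time.Duration
        for attempt := 1; attempt <= 7; attempt++ {
            d = nextDelay(d, 5*time.Minute)
            fmt.Printf("attempt %d: back off %s\n", attempt, d)
        }
    }

The sequence plateaus at five minutes, which is why the same not-found error keeps reappearing in the log at a slowing cadence rather than once.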
Nov 8 00:22:42.688407 containerd[1469]: time="2025-11-08T00:22:42.685701675Z" level=info msg="CreateContainer within sandbox \"cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1baa0d092e270a6e9d1d3ab9575689bb04a2ca91391745c415a32c4485662d6\"" Nov 8 00:22:42.695639 containerd[1469]: time="2025-11-08T00:22:42.695573527Z" level=info msg="StartContainer for \"c1baa0d092e270a6e9d1d3ab9575689bb04a2ca91391745c415a32c4485662d6\"" Nov 8 00:22:42.697269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930347537.mount: Deactivated successfully. Nov 8 00:22:42.794559 systemd[1]: Started cri-containerd-c1baa0d092e270a6e9d1d3ab9575689bb04a2ca91391745c415a32c4485662d6.scope - libcontainer container c1baa0d092e270a6e9d1d3ab9575689bb04a2ca91391745c415a32c4485662d6. Nov 8 00:22:42.848454 systemd-networkd[1350]: califd48146bae3: Link UP Nov 8 00:22:42.848802 systemd-networkd[1350]: califd48146bae3: Gained carrier Nov 8 00:22:42.856122 containerd[1469]: time="2025-11-08T00:22:42.856046911Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:42.874414 containerd[1469]: time="2025-11-08T00:22:42.872830486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:42.874414 containerd[1469]: time="2025-11-08T00:22:42.873073702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:42.881513 kubelet[2509]: E1108 00:22:42.875573 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:42.881513 kubelet[2509]: E1108 00:22:42.880430 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:42.881513 kubelet[2509]: E1108 00:22:42.880699 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2ncw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58dd75b54-57vp2_calico-apiserver(42409f77-f298-4938-9e62-f71427e3d95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:42.882945 kubelet[2509]: E1108 00:22:42.882575 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" podUID="42409f77-f298-4938-9e62-f71427e3d95e" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.476 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0 coredns-668d6bf9bc- kube-system 89a1f27b-cb85-45f6-a4b2-8e67e3f028ce 1077 0 2025-11-08 00:21:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-6d313a6df2 coredns-668d6bf9bc-jjkjr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd48146bae3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jjkjr" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.477 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jjkjr" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.619 [INFO][4658] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" HandleID="k8s-pod-network.bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.619 [INFO][4658] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" HandleID="k8s-pod-network.bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf5d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-6d313a6df2", "pod":"coredns-668d6bf9bc-jjkjr", "timestamp":"2025-11-08 00:22:42.619231006 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d313a6df2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.619 [INFO][4658] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.619 [INFO][4658] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.619 [INFO][4658] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d313a6df2' Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.665 [INFO][4658] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.712 [INFO][4658] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.739 [INFO][4658] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.765 [INFO][4658] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.772 [INFO][4658] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.773 [INFO][4658] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.787 [INFO][4658] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2 Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.804 [INFO][4658] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.822 [INFO][4658] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.200/26] block=192.168.45.192/26 handle="k8s-pod-network.bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.823 [INFO][4658] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.200/26] handle="k8s-pod-network.bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" host="ci-4081.3.6-n-6d313a6df2" Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.823 [INFO][4658] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
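Note how both assignment flows ([4566] for coredns-rs7hj and [4658] for coredns-jjkjr) are bracketed by "About to acquire host-wide IPAM lock" / "Acquired" / "Released": concurrent CNI ADDs on one node are serialized so two pods cannot claim the same slot, which is why the second flow lands cleanly on the next address, 192.168.45.200. A toy version of that serialization with a mutex-guarded free list; the types are hypothetical, and Calico's real lock is datastore-backed rather than an in-process mutex:

    package main

    import (
        "fmt"
        "sync"
    )

    // blockAllocator hands out ordinals from a 64-slot /26 block.
    // The mutex stands in for the host-wide IPAM lock in the log.
    type blockAllocator struct {
        mu   sync.Mutex
        used [64]bool
    }

    func (b *blockAllocator) claim() (int, bool) {
        b.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer b.mu.Unlock() // "Released host-wide IPAM lock."
        for i, taken := range b.used {
            if !taken {
                b.used[i] = true
                return i, true
            }
        }
        return 0, false
    }

    func main() {
        var alloc blockAllocator
        var wg sync.WaitGroup
        for w := 0; w < 2; w++ { // two concurrent ADDs, like [4566] and [4658]
            wg.Add(1)
            go func(w int) {
                defer wg.Done()
                if ord, ok := alloc.claim(); ok {
                    fmt.Printf("worker %d claimed ordinal %d\n", w, ord)
                }
            }(w)
        }
        wg.Wait()
    }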
Nov 8 00:22:42.910816 containerd[1469]: 2025-11-08 00:22:42.823 [INFO][4658] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.200/26] IPv6=[] ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" HandleID="k8s-pod-network.bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:42.914083 containerd[1469]: 2025-11-08 00:22:42.834 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jjkjr" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"89a1f27b-cb85-45f6-a4b2-8e67e3f028ce", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"", Pod:"coredns-668d6bf9bc-jjkjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd48146bae3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:42.914083 containerd[1469]: 2025-11-08 00:22:42.834 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.200/32] ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jjkjr" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:42.914083 containerd[1469]: 2025-11-08 00:22:42.836 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd48146bae3 ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jjkjr" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:42.914083 containerd[1469]: 2025-11-08 00:22:42.851 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-jjkjr" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:42.914083 containerd[1469]: 2025-11-08 00:22:42.863 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jjkjr" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"89a1f27b-cb85-45f6-a4b2-8e67e3f028ce", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2", Pod:"coredns-668d6bf9bc-jjkjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd48146bae3", MAC:"96:fb:29:64:56:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:42.914083 containerd[1469]: 2025-11-08 00:22:42.901 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jjkjr" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:42.960166 containerd[1469]: time="2025-11-08T00:22:42.959177126Z" level=info msg="StartContainer for \"c1baa0d092e270a6e9d1d3ab9575689bb04a2ca91391745c415a32c4485662d6\" returns successfully" Nov 8 00:22:43.005500 kubelet[2509]: E1108 00:22:43.003326 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" 
podUID="42409f77-f298-4938-9e62-f71427e3d95e" Nov 8 00:22:43.025384 kubelet[2509]: E1108 00:22:43.022215 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:43.028324 kubelet[2509]: E1108 00:22:43.026435 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" podUID="560dd5bf-b92f-472c-9028-b374dabf58bb" Nov 8 00:22:43.028512 containerd[1469]: time="2025-11-08T00:22:43.027522691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:22:43.028512 containerd[1469]: time="2025-11-08T00:22:43.027950917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:22:43.028512 containerd[1469]: time="2025-11-08T00:22:43.028024139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:43.028691 containerd[1469]: time="2025-11-08T00:22:43.028614976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:22:43.108644 systemd[1]: Started cri-containerd-bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2.scope - libcontainer container bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2. 
Nov 8 00:22:43.211109 kubelet[2509]: I1108 00:22:43.211010 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rs7hj" podStartSLOduration=53.2109848 podStartE2EDuration="53.2109848s" podCreationTimestamp="2025-11-08 00:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:22:43.206946117 +0000 UTC m=+59.003046383" watchObservedRunningTime="2025-11-08 00:22:43.2109848 +0000 UTC m=+59.007085064" Nov 8 00:22:43.308980 systemd-networkd[1350]: vxlan.calico: Link UP Nov 8 00:22:43.308995 systemd-networkd[1350]: vxlan.calico: Gained carrier Nov 8 00:22:43.366369 containerd[1469]: time="2025-11-08T00:22:43.365567755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jjkjr,Uid:89a1f27b-cb85-45f6-a4b2-8e67e3f028ce,Namespace:kube-system,Attempt:1,} returns sandbox id \"bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2\"" Nov 8 00:22:43.393681 kubelet[2509]: E1108 00:22:43.391600 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:43.402651 containerd[1469]: time="2025-11-08T00:22:43.400662511Z" level=info msg="CreateContainer within sandbox \"bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:22:43.428845 containerd[1469]: time="2025-11-08T00:22:43.428764582Z" level=info msg="CreateContainer within sandbox \"bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a446199f7b63a7bf43048ac1e2a258730a2e9208f57c5a521ba5cf97945cb93\"" Nov 8 00:22:43.432313 containerd[1469]: time="2025-11-08T00:22:43.431827213Z" level=info msg="StartContainer for \"9a446199f7b63a7bf43048ac1e2a258730a2e9208f57c5a521ba5cf97945cb93\"" Nov 8 00:22:43.504468 systemd[1]: Started cri-containerd-9a446199f7b63a7bf43048ac1e2a258730a2e9208f57c5a521ba5cf97945cb93.scope - libcontainer container 9a446199f7b63a7bf43048ac1e2a258730a2e9208f57c5a521ba5cf97945cb93. 
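The pod_startup_latency_tracker entry above is pure timestamp arithmetic: coredns-668d6bf9bc-rs7hj was created at 00:21:50 and first observed running at 00:22:43.2109848, and since no image pull happened (both pull timestamps are the zero value 0001-01-01), podStartE2EDuration equals podStartSLOduration. The same subtraction in Go, with both timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's default time.Time formatting used in the log.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        created, err := time.Parse(layout, "2025-11-08 00:21:50 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-11-08 00:22:43.2109848 +0000 UTC")
        if err != nil {
            panic(err)
        }

        fmt.Println(running.Sub(created)) // 53.2109848s, the reported SLO duration
    }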
Nov 8 00:22:43.673315 containerd[1469]: time="2025-11-08T00:22:43.672066199Z" level=info msg="StartContainer for \"9a446199f7b63a7bf43048ac1e2a258730a2e9208f57c5a521ba5cf97945cb93\" returns successfully" Nov 8 00:22:43.932421 systemd-networkd[1350]: cali6654c8ac669: Gained IPv6LL Nov 8 00:22:44.032224 kubelet[2509]: E1108 00:22:44.031683 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:44.034684 kubelet[2509]: E1108 00:22:44.034380 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:44.036010 kubelet[2509]: E1108 00:22:44.034748 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" podUID="42409f77-f298-4938-9e62-f71427e3d95e" Nov 8 00:22:44.150849 kubelet[2509]: I1108 00:22:44.150503 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jjkjr" podStartSLOduration=54.150471198 podStartE2EDuration="54.150471198s" podCreationTimestamp="2025-11-08 00:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:22:44.113103035 +0000 UTC m=+59.909203299" watchObservedRunningTime="2025-11-08 00:22:44.150471198 +0000 UTC m=+59.946571461" Nov 8 00:22:44.448316 containerd[1469]: time="2025-11-08T00:22:44.448259430Z" level=info msg="StopPodSandbox for \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\"" Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.586 [WARNING][4872] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"89a1f27b-cb85-45f6-a4b2-8e67e3f028ce", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2", Pod:"coredns-668d6bf9bc-jjkjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd48146bae3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.587 [INFO][4872] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.587 [INFO][4872] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" iface="eth0" netns="" Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.587 [INFO][4872] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.587 [INFO][4872] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.665 [INFO][4885] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" HandleID="k8s-pod-network.02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.666 [INFO][4885] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.666 [INFO][4885] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.684 [WARNING][4885] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" HandleID="k8s-pod-network.02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.684 [INFO][4885] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" HandleID="k8s-pod-network.02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.690 [INFO][4885] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:44.697228 containerd[1469]: 2025-11-08 00:22:44.693 [INFO][4872] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:44.697228 containerd[1469]: time="2025-11-08T00:22:44.696584681Z" level=info msg="TearDown network for sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\" successfully" Nov 8 00:22:44.697228 containerd[1469]: time="2025-11-08T00:22:44.696615695Z" level=info msg="StopPodSandbox for \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\" returns successfully" Nov 8 00:22:44.699567 containerd[1469]: time="2025-11-08T00:22:44.697831417Z" level=info msg="RemovePodSandbox for \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\"" Nov 8 00:22:44.699567 containerd[1469]: time="2025-11-08T00:22:44.697892358Z" level=info msg="Forcibly stopping sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\"" Nov 8 00:22:44.701763 systemd-networkd[1350]: califd48146bae3: Gained IPv6LL Nov 8 00:22:44.765438 systemd-networkd[1350]: vxlan.calico: Gained IPv6LL Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.789 [WARNING][4904] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"89a1f27b-cb85-45f6-a4b2-8e67e3f028ce", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"bd135e01c4a93b38887c175af78f0e5ea5ed5a1ba35624b1995ab2cc5a14f5b2", Pod:"coredns-668d6bf9bc-jjkjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd48146bae3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.791 [INFO][4904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.791 [INFO][4904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" iface="eth0" netns="" Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.791 [INFO][4904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.791 [INFO][4904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.833 [INFO][4911] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" HandleID="k8s-pod-network.02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.833 [INFO][4911] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.833 [INFO][4911] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.845 [WARNING][4911] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" HandleID="k8s-pod-network.02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.845 [INFO][4911] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" HandleID="k8s-pod-network.02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--jjkjr-eth0" Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.849 [INFO][4911] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:44.856546 containerd[1469]: 2025-11-08 00:22:44.854 [INFO][4904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97" Nov 8 00:22:44.857679 containerd[1469]: time="2025-11-08T00:22:44.856584123Z" level=info msg="TearDown network for sandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\" successfully" Nov 8 00:22:44.863823 containerd[1469]: time="2025-11-08T00:22:44.863749816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:44.864046 containerd[1469]: time="2025-11-08T00:22:44.863868473Z" level=info msg="RemovePodSandbox \"02f71390281fe7e931ae9f60881cf6d084b1054581dab2dd22338b9003b81b97\" returns successfully" Nov 8 00:22:44.865234 containerd[1469]: time="2025-11-08T00:22:44.864796570Z" level=info msg="StopPodSandbox for \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\"" Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:44.942 [WARNING][4925] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:44.943 [INFO][4925] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:44.943 [INFO][4925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" iface="eth0" netns="" Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:44.943 [INFO][4925] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:44.943 [INFO][4925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:44.981 [INFO][4932] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" HandleID="k8s-pod-network.881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:44.982 [INFO][4932] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:44.982 [INFO][4932] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:44.996 [WARNING][4932] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" HandleID="k8s-pod-network.881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:44.996 [INFO][4932] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" HandleID="k8s-pod-network.881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:45.001 [INFO][4932] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:45.010534 containerd[1469]: 2025-11-08 00:22:45.006 [INFO][4925] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:45.012794 containerd[1469]: time="2025-11-08T00:22:45.011707627Z" level=info msg="TearDown network for sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\" successfully" Nov 8 00:22:45.012794 containerd[1469]: time="2025-11-08T00:22:45.011821449Z" level=info msg="StopPodSandbox for \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\" returns successfully" Nov 8 00:22:45.012794 containerd[1469]: time="2025-11-08T00:22:45.012751679Z" level=info msg="RemovePodSandbox for \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\"" Nov 8 00:22:45.012794 containerd[1469]: time="2025-11-08T00:22:45.012794515Z" level=info msg="Forcibly stopping sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\"" Nov 8 00:22:45.038119 kubelet[2509]: E1108 00:22:45.037889 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:45.038119 kubelet[2509]: E1108 00:22:45.038001 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.093 [WARNING][4946] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" WorkloadEndpoint="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.093 [INFO][4946] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.093 [INFO][4946] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" iface="eth0" netns="" Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.093 [INFO][4946] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.093 [INFO][4946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.137 [INFO][4953] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" HandleID="k8s-pod-network.881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.137 [INFO][4953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.137 [INFO][4953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.147 [WARNING][4953] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" HandleID="k8s-pod-network.881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.147 [INFO][4953] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" HandleID="k8s-pod-network.881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Workload="ci--4081.3.6--n--6d313a6df2-k8s-whisker--8dcf856dd--sg2q5-eth0" Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.157 [INFO][4953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:45.164575 containerd[1469]: 2025-11-08 00:22:45.161 [INFO][4946] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070" Nov 8 00:22:45.165645 containerd[1469]: time="2025-11-08T00:22:45.165429391Z" level=info msg="TearDown network for sandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\" successfully" Nov 8 00:22:45.170982 containerd[1469]: time="2025-11-08T00:22:45.170878389Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:45.170982 containerd[1469]: time="2025-11-08T00:22:45.170985299Z" level=info msg="RemovePodSandbox \"881dac4823e2fc48b44f139339f39673b43acf8ed7f24c418d2cd014e34ee070\" returns successfully" Nov 8 00:22:45.172430 containerd[1469]: time="2025-11-08T00:22:45.171780155Z" level=info msg="StopPodSandbox for \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\"" Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.265 [WARNING][4968] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3fc490ed-6d34-41fd-bb44-ba621857b51e", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238", Pod:"goldmane-666569f655-7jxzl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1a81badbf4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.266 [INFO][4968] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.266 [INFO][4968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" iface="eth0" netns="" Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.266 [INFO][4968] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.266 [INFO][4968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.316 [INFO][4978] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" HandleID="k8s-pod-network.41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.317 [INFO][4978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.317 [INFO][4978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.330 [WARNING][4978] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" HandleID="k8s-pod-network.41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.330 [INFO][4978] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" HandleID="k8s-pod-network.41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.336 [INFO][4978] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:45.345779 containerd[1469]: 2025-11-08 00:22:45.341 [INFO][4968] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:45.348437 containerd[1469]: time="2025-11-08T00:22:45.346366076Z" level=info msg="TearDown network for sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\" successfully" Nov 8 00:22:45.348437 containerd[1469]: time="2025-11-08T00:22:45.346411421Z" level=info msg="StopPodSandbox for \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\" returns successfully" Nov 8 00:22:45.348437 containerd[1469]: time="2025-11-08T00:22:45.347864008Z" level=info msg="RemovePodSandbox for \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\"" Nov 8 00:22:45.348437 containerd[1469]: time="2025-11-08T00:22:45.347909845Z" level=info msg="Forcibly stopping sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\"" Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.425 [WARNING][4992] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3fc490ed-6d34-41fd-bb44-ba621857b51e", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"2e0c030880938b8525a5f94beecd0bde7ac191924a0ceeb3afe0fb4d67b7c238", Pod:"goldmane-666569f655-7jxzl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1a81badbf4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.426 [INFO][4992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.426 [INFO][4992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" iface="eth0" netns="" Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.426 [INFO][4992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.426 [INFO][4992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.466 [INFO][5000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" HandleID="k8s-pod-network.41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.467 [INFO][5000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.467 [INFO][5000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.478 [WARNING][5000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" HandleID="k8s-pod-network.41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.478 [INFO][5000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" HandleID="k8s-pod-network.41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Workload="ci--4081.3.6--n--6d313a6df2-k8s-goldmane--666569f655--7jxzl-eth0" Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.482 [INFO][5000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:45.490408 containerd[1469]: 2025-11-08 00:22:45.486 [INFO][4992] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21" Nov 8 00:22:45.491898 containerd[1469]: time="2025-11-08T00:22:45.491378937Z" level=info msg="TearDown network for sandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\" successfully" Nov 8 00:22:45.496151 containerd[1469]: time="2025-11-08T00:22:45.496056658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:45.496520 containerd[1469]: time="2025-11-08T00:22:45.496165695Z" level=info msg="RemovePodSandbox \"41934f83d6c09b4759d9b73176859e3b97b141524b45af5afffce03991c17b21\" returns successfully" Nov 8 00:22:45.497804 containerd[1469]: time="2025-11-08T00:22:45.497353054Z" level=info msg="StopPodSandbox for \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\"" Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.571 [WARNING][5014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0", GenerateName:"calico-kube-controllers-c57565dbd-", Namespace:"calico-system", SelfLink:"", UID:"560dd5bf-b92f-472c-9028-b374dabf58bb", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c57565dbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11", Pod:"calico-kube-controllers-c57565dbd-rlbqk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80bc9b8febb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.572 [INFO][5014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.572 [INFO][5014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" iface="eth0" netns="" Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.572 [INFO][5014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.572 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.624 [INFO][5021] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" HandleID="k8s-pod-network.2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.624 [INFO][5021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.624 [INFO][5021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.641 [WARNING][5021] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" HandleID="k8s-pod-network.2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.641 [INFO][5021] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" HandleID="k8s-pod-network.2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.649 [INFO][5021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:45.657924 containerd[1469]: 2025-11-08 00:22:45.653 [INFO][5014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:45.657924 containerd[1469]: time="2025-11-08T00:22:45.657896631Z" level=info msg="TearDown network for sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\" successfully" Nov 8 00:22:45.661120 containerd[1469]: time="2025-11-08T00:22:45.657943959Z" level=info msg="StopPodSandbox for \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\" returns successfully" Nov 8 00:22:45.661120 containerd[1469]: time="2025-11-08T00:22:45.659220531Z" level=info msg="RemovePodSandbox for \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\"" Nov 8 00:22:45.661120 containerd[1469]: time="2025-11-08T00:22:45.659278522Z" level=info msg="Forcibly stopping sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\"" Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.734 [WARNING][5036] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0", GenerateName:"calico-kube-controllers-c57565dbd-", Namespace:"calico-system", SelfLink:"", UID:"560dd5bf-b92f-472c-9028-b374dabf58bb", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c57565dbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"34ba30e3ac5491ec7da10587ae08e6729b49e0eea17db341ce091c7997874b11", Pod:"calico-kube-controllers-c57565dbd-rlbqk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80bc9b8febb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.734 [INFO][5036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.734 [INFO][5036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" iface="eth0" netns="" Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.734 [INFO][5036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.734 [INFO][5036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.795 [INFO][5043] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" HandleID="k8s-pod-network.2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.796 [INFO][5043] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.796 [INFO][5043] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.816 [WARNING][5043] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" HandleID="k8s-pod-network.2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.816 [INFO][5043] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" HandleID="k8s-pod-network.2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--kube--controllers--c57565dbd--rlbqk-eth0" Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.820 [INFO][5043] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:45.831612 containerd[1469]: 2025-11-08 00:22:45.825 [INFO][5036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7" Nov 8 00:22:45.831612 containerd[1469]: time="2025-11-08T00:22:45.830969203Z" level=info msg="TearDown network for sandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\" successfully" Nov 8 00:22:45.838425 containerd[1469]: time="2025-11-08T00:22:45.838155637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:45.838425 containerd[1469]: time="2025-11-08T00:22:45.838291904Z" level=info msg="RemovePodSandbox \"2a1c2e56db55f5830112a91f1a6cbeb1a09dda549cbe643e66093f715f085fc7\" returns successfully" Nov 8 00:22:45.839495 containerd[1469]: time="2025-11-08T00:22:45.839440976Z" level=info msg="StopPodSandbox for \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\"" Nov 8 00:22:46.065731 kubelet[2509]: E1108 00:22:46.064144 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:46.069771 kubelet[2509]: E1108 00:22:46.069696 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:45.940 [WARNING][5057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"daa9f29d-2835-4e9f-8181-7aeaf654817a", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7", Pod:"csi-node-driver-7q5q2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbd12fc0c0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:45.941 [INFO][5057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:45.941 [INFO][5057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" iface="eth0" netns="" Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:45.942 [INFO][5057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:45.942 [INFO][5057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:46.039 [INFO][5065] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" HandleID="k8s-pod-network.9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:46.039 [INFO][5065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:46.039 [INFO][5065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:46.059 [WARNING][5065] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" HandleID="k8s-pod-network.9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:46.060 [INFO][5065] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" HandleID="k8s-pod-network.9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:46.070 [INFO][5065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:46.083331 containerd[1469]: 2025-11-08 00:22:46.076 [INFO][5057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:46.083331 containerd[1469]: time="2025-11-08T00:22:46.082740988Z" level=info msg="TearDown network for sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\" successfully" Nov 8 00:22:46.083331 containerd[1469]: time="2025-11-08T00:22:46.082786566Z" level=info msg="StopPodSandbox for \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\" returns successfully" Nov 8 00:22:46.085812 containerd[1469]: time="2025-11-08T00:22:46.085216093Z" level=info msg="RemovePodSandbox for \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\"" Nov 8 00:22:46.085812 containerd[1469]: time="2025-11-08T00:22:46.085340413Z" level=info msg="Forcibly stopping sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\"" Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.188 [WARNING][5079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"daa9f29d-2835-4e9f-8181-7aeaf654817a", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"1dc9f0881205e36912d245d954a698d4fb3625758584148cd2b4c2de8a7ca5b7", Pod:"csi-node-driver-7q5q2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbd12fc0c0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.188 [INFO][5079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.188 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" iface="eth0" netns="" Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.188 [INFO][5079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.188 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.246 [INFO][5086] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" HandleID="k8s-pod-network.9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.248 [INFO][5086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.248 [INFO][5086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.267 [WARNING][5086] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" HandleID="k8s-pod-network.9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.267 [INFO][5086] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" HandleID="k8s-pod-network.9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Workload="ci--4081.3.6--n--6d313a6df2-k8s-csi--node--driver--7q5q2-eth0" Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.271 [INFO][5086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:46.278737 containerd[1469]: 2025-11-08 00:22:46.275 [INFO][5079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a" Nov 8 00:22:46.278737 containerd[1469]: time="2025-11-08T00:22:46.278675167Z" level=info msg="TearDown network for sandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\" successfully" Nov 8 00:22:46.287311 containerd[1469]: time="2025-11-08T00:22:46.287119676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:46.287501 containerd[1469]: time="2025-11-08T00:22:46.287339753Z" level=info msg="RemovePodSandbox \"9d61751393ec1c7a04ee1d739632b935ad723913748b979c27eae0b55bc0ce4a\" returns successfully" Nov 8 00:22:46.288411 containerd[1469]: time="2025-11-08T00:22:46.288292135Z" level=info msg="StopPodSandbox for \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\"" Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.393 [WARNING][5101] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0", GenerateName:"calico-apiserver-58dd75b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"31b491a3-55cf-4c4e-922c-621192b0de8f", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58dd75b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4", Pod:"calico-apiserver-58dd75b54-s7bcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cda4c7db43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.394 [INFO][5101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.394 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" iface="eth0" netns="" Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.394 [INFO][5101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.394 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.452 [INFO][5109] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" HandleID="k8s-pod-network.827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.453 [INFO][5109] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.453 [INFO][5109] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.465 [WARNING][5109] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" HandleID="k8s-pod-network.827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.466 [INFO][5109] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" HandleID="k8s-pod-network.827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.470 [INFO][5109] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:46.477061 containerd[1469]: 2025-11-08 00:22:46.473 [INFO][5101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:46.482972 containerd[1469]: time="2025-11-08T00:22:46.478051571Z" level=info msg="TearDown network for sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\" successfully" Nov 8 00:22:46.482972 containerd[1469]: time="2025-11-08T00:22:46.478174307Z" level=info msg="StopPodSandbox for \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\" returns successfully" Nov 8 00:22:46.482972 containerd[1469]: time="2025-11-08T00:22:46.481549878Z" level=info msg="RemovePodSandbox for \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\"" Nov 8 00:22:46.482972 containerd[1469]: time="2025-11-08T00:22:46.481612541Z" level=info msg="Forcibly stopping sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\"" Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.573 [WARNING][5124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0", GenerateName:"calico-apiserver-58dd75b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"31b491a3-55cf-4c4e-922c-621192b0de8f", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58dd75b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"b907c08d8a12d924786117c949db080e7b19e2bdcb797d2b6bdc3766199e8fd4", Pod:"calico-apiserver-58dd75b54-s7bcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cda4c7db43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.574 [INFO][5124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.574 [INFO][5124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" iface="eth0" netns="" Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.574 [INFO][5124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.574 [INFO][5124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.632 [INFO][5131] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" HandleID="k8s-pod-network.827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.632 [INFO][5131] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.633 [INFO][5131] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.643 [WARNING][5131] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" HandleID="k8s-pod-network.827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.644 [INFO][5131] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" HandleID="k8s-pod-network.827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--s7bcs-eth0" Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.647 [INFO][5131] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:46.655456 containerd[1469]: 2025-11-08 00:22:46.651 [INFO][5124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5" Nov 8 00:22:46.659756 containerd[1469]: time="2025-11-08T00:22:46.655478975Z" level=info msg="TearDown network for sandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\" successfully" Nov 8 00:22:46.671190 containerd[1469]: time="2025-11-08T00:22:46.671122078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:46.671396 containerd[1469]: time="2025-11-08T00:22:46.671264377Z" level=info msg="RemovePodSandbox \"827ea42b6a23d04286792527797dd8ef02625b140651d66577be89acd8b420b5\" returns successfully" Nov 8 00:22:46.673401 containerd[1469]: time="2025-11-08T00:22:46.673338641Z" level=info msg="StopPodSandbox for \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\"" Nov 8 00:22:46.806931 systemd[1]: Started sshd@9-24.199.105.232:22-139.178.68.195:46930.service - OpenSSH per-connection server daemon (139.178.68.195:46930). Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.760 [WARNING][5145] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0", GenerateName:"calico-apiserver-58dd75b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"42409f77-f298-4938-9e62-f71427e3d95e", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58dd75b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d", Pod:"calico-apiserver-58dd75b54-57vp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53e9612fd5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.761 [INFO][5145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.761 [INFO][5145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" iface="eth0" netns="" Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.761 [INFO][5145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.761 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.821 [INFO][5152] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" HandleID="k8s-pod-network.82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.821 [INFO][5152] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.822 [INFO][5152] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.836 [WARNING][5152] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" HandleID="k8s-pod-network.82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.836 [INFO][5152] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" HandleID="k8s-pod-network.82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.840 [INFO][5152] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:46.856662 containerd[1469]: 2025-11-08 00:22:46.847 [INFO][5145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:46.856662 containerd[1469]: time="2025-11-08T00:22:46.856442224Z" level=info msg="TearDown network for sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\" successfully" Nov 8 00:22:46.856662 containerd[1469]: time="2025-11-08T00:22:46.856483006Z" level=info msg="StopPodSandbox for \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\" returns successfully" Nov 8 00:22:46.858524 containerd[1469]: time="2025-11-08T00:22:46.858065408Z" level=info msg="RemovePodSandbox for \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\"" Nov 8 00:22:46.858524 containerd[1469]: time="2025-11-08T00:22:46.858118550Z" level=info msg="Forcibly stopping sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\"" Nov 8 00:22:46.999236 sshd[5158]: Accepted publickey for core from 139.178.68.195 port 46930 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:47.004168 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:47.015140 systemd-logind[1444]: New session 9 of user core. Nov 8 00:22:47.023697 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:46.957 [WARNING][5168] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0", GenerateName:"calico-apiserver-58dd75b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"42409f77-f298-4938-9e62-f71427e3d95e", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58dd75b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"5ec28f255d3d2e031175209dd719d2e7d44e0dcbf1c7615428d0bef99e99aa6d", Pod:"calico-apiserver-58dd75b54-57vp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53e9612fd5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:46.961 [INFO][5168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:46.961 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" iface="eth0" netns="" Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:46.961 [INFO][5168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:46.961 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:47.005 [INFO][5176] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" HandleID="k8s-pod-network.82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:47.005 [INFO][5176] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:47.005 [INFO][5176] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:47.018 [WARNING][5176] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" HandleID="k8s-pod-network.82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:47.018 [INFO][5176] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" HandleID="k8s-pod-network.82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Workload="ci--4081.3.6--n--6d313a6df2-k8s-calico--apiserver--58dd75b54--57vp2-eth0" Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:47.022 [INFO][5176] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:47.030856 containerd[1469]: 2025-11-08 00:22:47.026 [INFO][5168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9" Nov 8 00:22:47.033416 containerd[1469]: time="2025-11-08T00:22:47.032595136Z" level=info msg="TearDown network for sandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\" successfully" Nov 8 00:22:47.039340 containerd[1469]: time="2025-11-08T00:22:47.039030913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:47.039340 containerd[1469]: time="2025-11-08T00:22:47.039173117Z" level=info msg="RemovePodSandbox \"82922fe7505a5bafe66147bb1659f78c8682f3130ed040ff3f0771c499bea7d9\" returns successfully" Nov 8 00:22:47.041160 containerd[1469]: time="2025-11-08T00:22:47.041065555Z" level=info msg="StopPodSandbox for \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\"" Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.123 [WARNING][5191] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5", Pod:"coredns-668d6bf9bc-rs7hj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6654c8ac669", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.124 [INFO][5191] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.124 [INFO][5191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" iface="eth0" netns="" Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.124 [INFO][5191] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.124 [INFO][5191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.181 [INFO][5202] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" HandleID="k8s-pod-network.65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.182 [INFO][5202] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.182 [INFO][5202] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.194 [WARNING][5202] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" HandleID="k8s-pod-network.65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.194 [INFO][5202] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" HandleID="k8s-pod-network.65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.197 [INFO][5202] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:47.204275 containerd[1469]: 2025-11-08 00:22:47.200 [INFO][5191] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:47.206082 containerd[1469]: time="2025-11-08T00:22:47.204326467Z" level=info msg="TearDown network for sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\" successfully" Nov 8 00:22:47.206082 containerd[1469]: time="2025-11-08T00:22:47.204369337Z" level=info msg="StopPodSandbox for \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\" returns successfully" Nov 8 00:22:47.208748 containerd[1469]: time="2025-11-08T00:22:47.208339723Z" level=info msg="RemovePodSandbox for \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\"" Nov 8 00:22:47.208748 containerd[1469]: time="2025-11-08T00:22:47.208457893Z" level=info msg="Forcibly stopping sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\"" Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.298 [WARNING][5220] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6d4dc87-9d2e-4afc-ab03-361e2e8d6f52", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d313a6df2", ContainerID:"cd7f7decfde0aafe5f986995b6a260b3110a5fc074e34e7fa85dba187a3ae5d5", Pod:"coredns-668d6bf9bc-rs7hj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6654c8ac669", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.299 [INFO][5220] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.299 [INFO][5220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" iface="eth0" netns="" Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.299 [INFO][5220] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.299 [INFO][5220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.353 [INFO][5227] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" HandleID="k8s-pod-network.65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.355 [INFO][5227] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.355 [INFO][5227] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.383 [WARNING][5227] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" HandleID="k8s-pod-network.65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.384 [INFO][5227] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" HandleID="k8s-pod-network.65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Workload="ci--4081.3.6--n--6d313a6df2-k8s-coredns--668d6bf9bc--rs7hj-eth0" Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.401 [INFO][5227] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:22:47.408515 containerd[1469]: 2025-11-08 00:22:47.405 [INFO][5220] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6" Nov 8 00:22:47.410825 containerd[1469]: time="2025-11-08T00:22:47.408604986Z" level=info msg="TearDown network for sandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\" successfully" Nov 8 00:22:47.416232 containerd[1469]: time="2025-11-08T00:22:47.414251674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:22:47.416232 containerd[1469]: time="2025-11-08T00:22:47.414349852Z" level=info msg="RemovePodSandbox \"65cd644653a974756d9341d540a91cd8e353d25924cdf808c415a0af7736b1a6\" returns successfully" Nov 8 00:22:47.519913 sshd[5158]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:47.528647 systemd[1]: sshd@9-24.199.105.232:22-139.178.68.195:46930.service: Deactivated successfully. Nov 8 00:22:47.533950 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:22:47.536067 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:22:47.540543 systemd-logind[1444]: Removed session 9. 
Nov 8 00:22:48.369996 containerd[1469]: time="2025-11-08T00:22:48.369929286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:22:48.720485 containerd[1469]: time="2025-11-08T00:22:48.720003082Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:48.721552 containerd[1469]: time="2025-11-08T00:22:48.721420756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:22:48.721552 containerd[1469]: time="2025-11-08T00:22:48.721549500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:22:48.721988 kubelet[2509]: E1108 00:22:48.721878 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:48.721988 kubelet[2509]: E1108 00:22:48.721961 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:48.723176 kubelet[2509]: E1108 00:22:48.722150 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cfa83883e8f7431cbc54801fd68dfa44,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9ck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-97778988b-6hzb4_calico-system(6e6990fb-3126-46c4-96c6-a63ad2a68c21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:48.726338 containerd[1469]: time="2025-11-08T00:22:48.726282949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:22:49.072754 containerd[1469]: time="2025-11-08T00:22:49.072532596Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:49.073830 containerd[1469]: time="2025-11-08T00:22:49.073640269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:22:49.073830 containerd[1469]: time="2025-11-08T00:22:49.073751163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:49.074397 kubelet[2509]: E1108 00:22:49.074321 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:49.074397 kubelet[2509]: E1108 00:22:49.074395 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:49.074732 kubelet[2509]: E1108 00:22:49.074543 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9ck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-97778988b-6hzb4_calico-system(6e6990fb-3126-46c4-96c6-a63ad2a68c21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:49.076690 kubelet[2509]: E1108 00:22:49.076634 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-97778988b-6hzb4" podUID="6e6990fb-3126-46c4-96c6-a63ad2a68c21" Nov 8 00:22:50.365616 containerd[1469]: time="2025-11-08T00:22:50.364954458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:22:50.701577 containerd[1469]: time="2025-11-08T00:22:50.701351443Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:50.702848 containerd[1469]: time="2025-11-08T00:22:50.702753040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:22:50.703271 containerd[1469]: time="2025-11-08T00:22:50.702805242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:22:50.703354 kubelet[2509]: E1108 00:22:50.703106 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:22:50.703354 kubelet[2509]: E1108 00:22:50.703230 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:22:50.704163 kubelet[2509]: E1108 00:22:50.704046 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4l7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7q5q2_calico-system(daa9f29d-2835-4e9f-8181-7aeaf654817a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:50.710137 containerd[1469]: time="2025-11-08T00:22:50.710080982Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:22:51.091149 containerd[1469]: time="2025-11-08T00:22:51.091060095Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:51.093262 containerd[1469]: time="2025-11-08T00:22:51.092939289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:22:51.093262 containerd[1469]: time="2025-11-08T00:22:51.093030089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:22:51.093537 kubelet[2509]: E1108 00:22:51.093441 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:51.093537 kubelet[2509]: E1108 00:22:51.093520 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:22:51.093850 kubelet[2509]: E1108 00:22:51.093765 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4l7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7q5q2_calico-system(daa9f29d-2835-4e9f-8181-7aeaf654817a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:51.095107 kubelet[2509]: E1108 00:22:51.095031 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:22:51.367520 containerd[1469]: time="2025-11-08T00:22:51.365634018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:22:51.720072 containerd[1469]: time="2025-11-08T00:22:51.719859831Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:51.722736 containerd[1469]: time="2025-11-08T00:22:51.722594086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:22:51.723886 containerd[1469]: time="2025-11-08T00:22:51.722750054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:51.724218 kubelet[2509]: E1108 00:22:51.723002 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:51.724218 kubelet[2509]: E1108 00:22:51.723152 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:51.724218 kubelet[2509]: E1108 00:22:51.723495 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk5pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7jxzl_calico-system(3fc490ed-6d34-41fd-bb44-ba621857b51e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:51.725448 kubelet[2509]: E1108 00:22:51.724913 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7jxzl" podUID="3fc490ed-6d34-41fd-bb44-ba621857b51e" Nov 8 00:22:52.543243 systemd[1]: Started sshd@10-24.199.105.232:22-139.178.68.195:46938.service - OpenSSH per-connection server daemon (139.178.68.195:46938). Nov 8 00:22:52.622795 sshd[5248]: Accepted publickey for core from 139.178.68.195 port 46938 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:52.625305 sshd[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:52.633457 systemd-logind[1444]: New session 10 of user core. Nov 8 00:22:52.640779 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:22:52.821888 sshd[5248]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:52.834385 systemd[1]: sshd@10-24.199.105.232:22-139.178.68.195:46938.service: Deactivated successfully. Nov 8 00:22:52.839593 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:22:52.842394 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:22:52.846557 systemd-logind[1444]: Removed session 10. Nov 8 00:22:52.853838 systemd[1]: Started sshd@11-24.199.105.232:22-139.178.68.195:46948.service - OpenSSH per-connection server daemon (139.178.68.195:46948). Nov 8 00:22:52.917606 sshd[5262]: Accepted publickey for core from 139.178.68.195 port 46948 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:52.920942 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:52.930101 systemd-logind[1444]: New session 11 of user core. Nov 8 00:22:52.935787 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:22:53.200956 sshd[5262]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:53.216550 systemd[1]: sshd@11-24.199.105.232:22-139.178.68.195:46948.service: Deactivated successfully. 
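[annotation] Every pull in this stretch fails the same way: containerd resolves the tag against ghcr.io, receives an HTTP 404 ("trying next host - response was http.StatusNotFound", with only the small error body read, e.g. 73-93 bytes), and surfaces gRPC NotFound to the kubelet. The reference can be probed directly with the OCI distribution API; a self-contained sketch, assuming ghcr.io issues anonymous bearer tokens for public pulls (repository and tag copied from the log above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    func main() {
    	repo, tag := "flatcar/calico/whisker", "v3.30.4"

    	// Fetch an anonymous pull token for the repository.
    	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
    	resp, err := http.Get(tokURL)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	var tok struct {
    		Token string `json:"token"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
    		panic(err)
    	}

    	// HEAD the manifest: 200 means the tag resolves; a 404 matches the
    	// "failed to resolve reference ... not found" entries in the log.
    	url := fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag)
    	req, err := http.NewRequest(http.MethodHead, url, nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Authorization", "Bearer "+tok.Token)
    	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
    	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
    	res, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	res.Body.Close()
    	fmt.Println(repo+":"+tag, "->", res.Status) // expect 404 here
    }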
Nov 8 00:22:53.221745 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:22:53.226053 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:22:53.236051 systemd[1]: Started sshd@12-24.199.105.232:22-139.178.68.195:55922.service - OpenSSH per-connection server daemon (139.178.68.195:55922). Nov 8 00:22:53.241002 systemd-logind[1444]: Removed session 11. Nov 8 00:22:53.318061 sshd[5273]: Accepted publickey for core from 139.178.68.195 port 55922 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:53.320743 sshd[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:53.330333 systemd-logind[1444]: New session 12 of user core. Nov 8 00:22:53.336544 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:22:53.541223 sshd[5273]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:53.547188 systemd[1]: sshd@12-24.199.105.232:22-139.178.68.195:55922.service: Deactivated successfully. Nov 8 00:22:53.551083 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:22:53.552937 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:22:53.556374 systemd-logind[1444]: Removed session 12. Nov 8 00:22:54.372600 containerd[1469]: time="2025-11-08T00:22:54.371857343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:22:54.712680 containerd[1469]: time="2025-11-08T00:22:54.710580996Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:54.714277 containerd[1469]: time="2025-11-08T00:22:54.714062404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:22:54.714277 containerd[1469]: time="2025-11-08T00:22:54.714163971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:54.717216 kubelet[2509]: E1108 00:22:54.714925 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:54.717216 kubelet[2509]: E1108 00:22:54.715022 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:54.717216 kubelet[2509]: E1108 00:22:54.715463 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48n5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c57565dbd-rlbqk_calico-system(560dd5bf-b92f-472c-9028-b374dabf58bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:54.721872 kubelet[2509]: E1108 00:22:54.718923 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" podUID="560dd5bf-b92f-472c-9028-b374dabf58bb" Nov 8 00:22:54.722050 containerd[1469]: time="2025-11-08T00:22:54.718239828Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:55.046065 containerd[1469]: time="2025-11-08T00:22:55.045861416Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:55.047186 containerd[1469]: time="2025-11-08T00:22:55.047109828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:55.047434 containerd[1469]: time="2025-11-08T00:22:55.047323356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:55.047683 kubelet[2509]: E1108 00:22:55.047626 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:55.047772 kubelet[2509]: E1108 00:22:55.047707 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:55.048461 kubelet[2509]: E1108 00:22:55.047912 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp8zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58dd75b54-s7bcs_calico-apiserver(31b491a3-55cf-4c4e-922c-621192b0de8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:55.049570 kubelet[2509]: E1108 00:22:55.049406 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" podUID="31b491a3-55cf-4c4e-922c-621192b0de8f" Nov 8 00:22:56.365596 kubelet[2509]: E1108 00:22:56.364863 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:22:58.580960 systemd[1]: Started sshd@13-24.199.105.232:22-139.178.68.195:55926.service - OpenSSH per-connection server daemon (139.178.68.195:55926). Nov 8 00:22:58.649557 sshd[5298]: Accepted publickey for core from 139.178.68.195 port 55926 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:58.652984 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:58.668868 systemd-logind[1444]: New session 13 of user core. Nov 8 00:22:58.675665 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:22:58.922342 sshd[5298]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:58.930021 systemd[1]: sshd@13-24.199.105.232:22-139.178.68.195:55926.service: Deactivated successfully. Nov 8 00:22:58.934872 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:22:58.937062 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:22:58.940369 systemd-logind[1444]: Removed session 13. 
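[annotation] The recurring dns.go:153 warning is a resolver limit, not a failure: glibc honors at most three nameserver entries, so the kubelet clips the list and logs the line it actually applied; notably the applied line here carries 67.207.67.2 twice, and the duplicate survives, so the list is apparently not deduplicated before clipping. A sketch of that clipping with deduplication added (the dedup step is an assumption about a possible fix, not kubelet behavior):

    package main

    import "fmt"

    const maxNameservers = 3 // glibc MAXNS

    // clipNameservers drops duplicates, then enforces the resolver limit.
    func clipNameservers(ns []string) []string {
    	seen := map[string]bool{}
    	var out []string
    	for _, n := range ns {
    		if seen[n] {
    			continue // duplicate entries waste one of the three slots
    		}
    		seen[n] = true
    		out = append(out, n)
    	}
    	if len(out) > maxNameservers {
    		out = out[:maxNameservers]
    	}
    	return out
    }

    func main() {
    	applied := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2"}
    	fmt.Println(clipNameservers(applied)) // [67.207.67.2 67.207.67.3]
    }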
Nov 8 00:22:59.382501 containerd[1469]: time="2025-11-08T00:22:59.382367542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:59.384678 kubelet[2509]: E1108 00:22:59.384540 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-97778988b-6hzb4" podUID="6e6990fb-3126-46c4-96c6-a63ad2a68c21" Nov 8 00:22:59.747428 containerd[1469]: time="2025-11-08T00:22:59.746576518Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:59.750354 containerd[1469]: time="2025-11-08T00:22:59.750176690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:59.750354 containerd[1469]: time="2025-11-08T00:22:59.750230829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:59.750899 kubelet[2509]: E1108 00:22:59.750820 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:59.750997 kubelet[2509]: E1108 00:22:59.750908 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:59.751246 kubelet[2509]: E1108 00:22:59.751146 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2ncw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58dd75b54-57vp2_calico-apiserver(42409f77-f298-4938-9e62-f71427e3d95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:59.752777 kubelet[2509]: E1108 00:22:59.752676 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" podUID="42409f77-f298-4938-9e62-f71427e3d95e" Nov 8 00:23:03.366850 kubelet[2509]: E1108 00:23:03.366766 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:23:03.943713 systemd[1]: Started sshd@14-24.199.105.232:22-139.178.68.195:59032.service - OpenSSH per-connection server daemon (139.178.68.195:59032). Nov 8 00:23:04.035518 sshd[5311]: Accepted publickey for core from 139.178.68.195 port 59032 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:23:04.038499 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:04.046776 systemd-logind[1444]: New session 14 of user core. Nov 8 00:23:04.053665 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:23:04.227655 sshd[5311]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:04.239553 systemd[1]: sshd@14-24.199.105.232:22-139.178.68.195:59032.service: Deactivated successfully. Nov 8 00:23:04.242350 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:23:04.244404 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:23:04.251757 systemd[1]: Started sshd@15-24.199.105.232:22-139.178.68.195:59042.service - OpenSSH per-connection server daemon (139.178.68.195:59042). Nov 8 00:23:04.254740 systemd-logind[1444]: Removed session 14. Nov 8 00:23:04.308245 sshd[5324]: Accepted publickey for core from 139.178.68.195 port 59042 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:23:04.309738 sshd[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:04.316266 systemd-logind[1444]: New session 15 of user core. Nov 8 00:23:04.323659 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:23:04.854441 sshd[5324]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:04.866792 systemd[1]: sshd@15-24.199.105.232:22-139.178.68.195:59042.service: Deactivated successfully. Nov 8 00:23:04.874157 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:23:04.878280 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:23:04.892072 systemd[1]: Started sshd@16-24.199.105.232:22-139.178.68.195:59046.service - OpenSSH per-connection server daemon (139.178.68.195:59046). Nov 8 00:23:04.903776 systemd-logind[1444]: Removed session 15. Nov 8 00:23:05.016447 sshd[5343]: Accepted publickey for core from 139.178.68.195 port 59046 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:23:05.022001 sshd[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:05.033488 systemd-logind[1444]: New session 16 of user core. Nov 8 00:23:05.039797 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 8 00:23:05.131635 kubelet[2509]: E1108 00:23:05.130897 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:23:05.366512 kubelet[2509]: E1108 00:23:05.366462 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7jxzl" podUID="3fc490ed-6d34-41fd-bb44-ba621857b51e" Nov 8 00:23:06.090466 sshd[5343]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:06.109097 systemd[1]: sshd@16-24.199.105.232:22-139.178.68.195:59046.service: Deactivated successfully. Nov 8 00:23:06.114026 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:23:06.116347 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:23:06.131028 systemd[1]: Started sshd@17-24.199.105.232:22-139.178.68.195:59054.service - OpenSSH per-connection server daemon (139.178.68.195:59054). Nov 8 00:23:06.135622 systemd-logind[1444]: Removed session 16. Nov 8 00:23:06.221130 sshd[5382]: Accepted publickey for core from 139.178.68.195 port 59054 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:23:06.223866 sshd[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:06.232184 systemd-logind[1444]: New session 17 of user core. Nov 8 00:23:06.239597 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:23:06.863719 sshd[5382]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:06.883693 systemd[1]: sshd@17-24.199.105.232:22-139.178.68.195:59054.service: Deactivated successfully. Nov 8 00:23:06.890807 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:23:06.894790 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:23:06.907818 systemd[1]: Started sshd@18-24.199.105.232:22-139.178.68.195:59064.service - OpenSSH per-connection server daemon (139.178.68.195:59064). Nov 8 00:23:06.912439 systemd-logind[1444]: Removed session 17. Nov 8 00:23:06.971782 sshd[5394]: Accepted publickey for core from 139.178.68.195 port 59064 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:23:06.975597 sshd[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:06.984483 systemd-logind[1444]: New session 18 of user core. Nov 8 00:23:06.990567 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:23:07.207438 sshd[5394]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:07.215657 systemd[1]: sshd@18-24.199.105.232:22-139.178.68.195:59064.service: Deactivated successfully. Nov 8 00:23:07.222142 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:23:07.226507 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:23:07.229420 systemd-logind[1444]: Removed session 18. 
Nov 8 00:23:07.368489 kubelet[2509]: E1108 00:23:07.367305 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" podUID="31b491a3-55cf-4c4e-922c-621192b0de8f" Nov 8 00:23:07.370159 kubelet[2509]: E1108 00:23:07.367382 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" podUID="560dd5bf-b92f-472c-9028-b374dabf58bb" Nov 8 00:23:12.233108 systemd[1]: Started sshd@19-24.199.105.232:22-139.178.68.195:59074.service - OpenSSH per-connection server daemon (139.178.68.195:59074). Nov 8 00:23:12.305109 sshd[5410]: Accepted publickey for core from 139.178.68.195 port 59074 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:23:12.304774 sshd[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:12.323055 systemd-logind[1444]: New session 19 of user core. Nov 8 00:23:12.330750 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:23:12.367465 containerd[1469]: time="2025-11-08T00:23:12.367341287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:23:12.586710 sshd[5410]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:12.594017 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:23:12.594811 systemd[1]: sshd@19-24.199.105.232:22-139.178.68.195:59074.service: Deactivated successfully. Nov 8 00:23:12.599464 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:23:12.608901 systemd-logind[1444]: Removed session 19. 
Nov 8 00:23:12.752836 containerd[1469]: time="2025-11-08T00:23:12.752725687Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:23:12.754401 containerd[1469]: time="2025-11-08T00:23:12.754275367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:23:12.754862 containerd[1469]: time="2025-11-08T00:23:12.754338509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:23:12.755305 kubelet[2509]: E1108 00:23:12.755148 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:23:12.755305 kubelet[2509]: E1108 00:23:12.755286 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:23:12.755891 kubelet[2509]: E1108 00:23:12.755484 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cfa83883e8f7431cbc54801fd68dfa44,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9ck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-97778988b-6hzb4_calico-system(6e6990fb-3126-46c4-96c6-a63ad2a68c21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:23:12.759773 containerd[1469]: 
time="2025-11-08T00:23:12.759647323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:23:13.102766 containerd[1469]: time="2025-11-08T00:23:13.102564078Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:23:13.103906 containerd[1469]: time="2025-11-08T00:23:13.103795867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:23:13.103906 containerd[1469]: time="2025-11-08T00:23:13.103962815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:23:13.104505 kubelet[2509]: E1108 00:23:13.104302 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:23:13.104505 kubelet[2509]: E1108 00:23:13.104386 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:23:13.105974 kubelet[2509]: E1108 00:23:13.104551 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9ck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-97778988b-6hzb4_calico-system(6e6990fb-3126-46c4-96c6-a63ad2a68c21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:23:13.106588 kubelet[2509]: E1108 00:23:13.106399 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-97778988b-6hzb4" podUID="6e6990fb-3126-46c4-96c6-a63ad2a68c21" Nov 8 00:23:14.370067 kubelet[2509]: E1108 00:23:14.369510 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" podUID="42409f77-f298-4938-9e62-f71427e3d95e" Nov 8 00:23:15.366268 kubelet[2509]: E1108 00:23:15.364063 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:23:16.370709 containerd[1469]: time="2025-11-08T00:23:16.370472568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:23:16.752053 containerd[1469]: time="2025-11-08T00:23:16.750148165Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:23:16.752053 containerd[1469]: time="2025-11-08T00:23:16.751448236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:23:16.752053 containerd[1469]: time="2025-11-08T00:23:16.751521875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:23:16.752570 kubelet[2509]: E1108 00:23:16.751801 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:23:16.752570 kubelet[2509]: E1108 00:23:16.751883 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:23:16.752570 kubelet[2509]: E1108 00:23:16.752110 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4l7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7q5q2_calico-system(daa9f29d-2835-4e9f-8181-7aeaf654817a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:23:16.755821 containerd[1469]: time="2025-11-08T00:23:16.755390791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:23:17.109098 containerd[1469]: time="2025-11-08T00:23:17.108823683Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:23:17.110747 containerd[1469]: time="2025-11-08T00:23:17.110503686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:23:17.110747 containerd[1469]: time="2025-11-08T00:23:17.110548714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:23:17.111891 kubelet[2509]: E1108 00:23:17.110970 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:23:17.111891 kubelet[2509]: E1108 00:23:17.111043 2509 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:23:17.111891 kubelet[2509]: E1108 00:23:17.111231 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4l7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7q5q2_calico-system(daa9f29d-2835-4e9f-8181-7aeaf654817a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:23:17.112630 kubelet[2509]: E1108 00:23:17.112530 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-7q5q2" podUID="daa9f29d-2835-4e9f-8181-7aeaf654817a" Nov 8 00:23:17.613904 systemd[1]: Started sshd@20-24.199.105.232:22-139.178.68.195:43206.service - OpenSSH per-connection server daemon (139.178.68.195:43206). Nov 8 00:23:17.821811 sshd[5423]: Accepted publickey for core from 139.178.68.195 port 43206 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:23:17.825832 sshd[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:17.838683 systemd-logind[1444]: New session 20 of user core. Nov 8 00:23:17.844760 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:23:18.379513 containerd[1469]: time="2025-11-08T00:23:18.376610325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:23:18.716798 containerd[1469]: time="2025-11-08T00:23:18.715370071Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:23:18.716798 containerd[1469]: time="2025-11-08T00:23:18.716565998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:23:18.716798 containerd[1469]: time="2025-11-08T00:23:18.716711073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:23:18.718651 kubelet[2509]: E1108 00:23:18.717398 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:23:18.718651 kubelet[2509]: E1108 00:23:18.717486 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:23:18.718651 kubelet[2509]: E1108 00:23:18.717831 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk5pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7jxzl_calico-system(3fc490ed-6d34-41fd-bb44-ba621857b51e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:23:18.719782 containerd[1469]: time="2025-11-08T00:23:18.718281669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:23:18.720140 kubelet[2509]: E1108 00:23:18.719982 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-7jxzl" podUID="3fc490ed-6d34-41fd-bb44-ba621857b51e" Nov 8 00:23:18.862793 sshd[5423]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:18.871548 systemd[1]: sshd@20-24.199.105.232:22-139.178.68.195:43206.service: Deactivated successfully. Nov 8 00:23:18.876320 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:23:18.878331 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:23:18.881330 systemd-logind[1444]: Removed session 20. Nov 8 00:23:19.073085 containerd[1469]: time="2025-11-08T00:23:19.072672234Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:23:19.074898 containerd[1469]: time="2025-11-08T00:23:19.074786467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:23:19.075820 containerd[1469]: time="2025-11-08T00:23:19.074827237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:23:19.076009 kubelet[2509]: E1108 00:23:19.075281 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:23:19.076009 kubelet[2509]: E1108 00:23:19.075364 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:23:19.076009 kubelet[2509]: E1108 00:23:19.075582 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48n5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c57565dbd-rlbqk_calico-system(560dd5bf-b92f-472c-9028-b374dabf58bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:23:19.078850 kubelet[2509]: E1108 00:23:19.078575 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c57565dbd-rlbqk" podUID="560dd5bf-b92f-472c-9028-b374dabf58bb" Nov 8 00:23:19.368660 kubelet[2509]: E1108 00:23:19.367755 2509 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:23:21.363809 kubelet[2509]: E1108 00:23:21.363752 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 8 00:23:22.382068 containerd[1469]: time="2025-11-08T00:23:22.378811184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:23:22.724633 containerd[1469]: time="2025-11-08T00:23:22.723258051Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:23:22.727145 containerd[1469]: time="2025-11-08T00:23:22.726035157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:23:22.727145 containerd[1469]: time="2025-11-08T00:23:22.726165131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:23:22.727449 kubelet[2509]: E1108 00:23:22.727366 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:23:22.727998 kubelet[2509]: E1108 00:23:22.727456 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:23:22.727998 kubelet[2509]: E1108 00:23:22.727855 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp8zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58dd75b54-s7bcs_calico-apiserver(31b491a3-55cf-4c4e-922c-621192b0de8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:23:22.730427 kubelet[2509]: E1108 00:23:22.729993 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-s7bcs" podUID="31b491a3-55cf-4c4e-922c-621192b0de8f" Nov 8 00:23:23.888159 systemd[1]: Started sshd@21-24.199.105.232:22-139.178.68.195:37808.service - OpenSSH per-connection server daemon (139.178.68.195:37808). Nov 8 00:23:23.994355 sshd[5438]: Accepted publickey for core from 139.178.68.195 port 37808 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:23:23.996856 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:24.007675 systemd-logind[1444]: New session 21 of user core. Nov 8 00:23:24.013755 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:23:24.349698 sshd[5438]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:24.367009 systemd[1]: sshd@21-24.199.105.232:22-139.178.68.195:37808.service: Deactivated successfully. Nov 8 00:23:24.373518 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:23:24.377292 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:23:24.382869 systemd-logind[1444]: Removed session 21. 
Nov 8 00:23:24.383772 kubelet[2509]: E1108 00:23:24.383714 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-97778988b-6hzb4" podUID="6e6990fb-3126-46c4-96c6-a63ad2a68c21" Nov 8 00:23:26.370114 containerd[1469]: time="2025-11-08T00:23:26.369959579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:23:26.710601 containerd[1469]: time="2025-11-08T00:23:26.710358549Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:23:26.712103 containerd[1469]: time="2025-11-08T00:23:26.711961917Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:23:26.712297 containerd[1469]: time="2025-11-08T00:23:26.712005174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:23:26.712662 kubelet[2509]: E1108 00:23:26.712541 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:23:26.712662 kubelet[2509]: E1108 00:23:26.712635 2509 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:23:26.713589 kubelet[2509]: E1108 00:23:26.712889 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2ncw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58dd75b54-57vp2_calico-apiserver(42409f77-f298-4938-9e62-f71427e3d95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:23:26.716066 kubelet[2509]: E1108 00:23:26.715935 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58dd75b54-57vp2" podUID="42409f77-f298-4938-9e62-f71427e3d95e"