Dec 12 18:33:32.881659 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 12 18:33:32.881696 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:33:32.881714 kernel: BIOS-provided physical RAM map: Dec 12 18:33:32.881723 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 12 18:33:32.881732 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 12 18:33:32.881741 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 12 18:33:32.881753 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Dec 12 18:33:32.881769 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Dec 12 18:33:32.881779 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 12 18:33:32.881788 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 12 18:33:32.881798 kernel: NX (Execute Disable) protection: active Dec 12 18:33:32.881811 kernel: APIC: Static calls initialized Dec 12 18:33:32.881821 kernel: SMBIOS 2.8 present. Dec 12 18:33:32.881832 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Dec 12 18:33:32.881845 kernel: DMI: Memory slots populated: 1/1 Dec 12 18:33:32.881857 kernel: Hypervisor detected: KVM Dec 12 18:33:32.881877 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 12 18:33:32.881890 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 12 18:33:32.881902 kernel: kvm-clock: using sched offset of 4932152123 cycles Dec 12 18:33:32.881915 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 12 18:33:32.881927 kernel: tsc: Detected 2494.140 MHz processor Dec 12 18:33:32.881940 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 12 18:33:32.881952 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 12 18:33:32.881963 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 12 18:33:32.881975 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 12 18:33:32.881987 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 12 18:33:32.882004 kernel: ACPI: Early table checksum verification disabled Dec 12 18:33:32.882015 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Dec 12 18:33:32.883142 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:33:32.883161 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:33:32.883170 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:33:32.883179 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 12 18:33:32.883187 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:33:32.883195 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:33:32.883209 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:33:32.883217 kernel: ACPI: 
WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:33:32.883225 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854] Dec 12 18:33:32.883233 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0] Dec 12 18:33:32.883241 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 12 18:33:32.883249 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4] Dec 12 18:33:32.883262 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c] Dec 12 18:33:32.883273 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4] Dec 12 18:33:32.883281 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc] Dec 12 18:33:32.883289 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 12 18:33:32.883298 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 12 18:33:32.883306 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Dec 12 18:33:32.883315 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Dec 12 18:33:32.883324 kernel: Zone ranges: Dec 12 18:33:32.883332 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 12 18:33:32.883343 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Dec 12 18:33:32.883351 kernel: Normal empty Dec 12 18:33:32.883359 kernel: Device empty Dec 12 18:33:32.883367 kernel: Movable zone start for each node Dec 12 18:33:32.883376 kernel: Early memory node ranges Dec 12 18:33:32.883384 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 12 18:33:32.883392 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Dec 12 18:33:32.883400 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Dec 12 18:33:32.883408 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 12 18:33:32.883419 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 12 18:33:32.883428 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Dec 12 18:33:32.883436 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 12 18:33:32.883449 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 12 18:33:32.883457 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 12 18:33:32.883468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 12 18:33:32.883476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 12 18:33:32.883485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 12 18:33:32.883495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 12 18:33:32.883507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 12 18:33:32.883516 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 12 18:33:32.883524 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 12 18:33:32.883532 kernel: TSC deadline timer available Dec 12 18:33:32.883540 kernel: CPU topo: Max. logical packages: 1 Dec 12 18:33:32.883548 kernel: CPU topo: Max. logical dies: 1 Dec 12 18:33:32.883556 kernel: CPU topo: Max. dies per package: 1 Dec 12 18:33:32.883564 kernel: CPU topo: Max. threads per core: 1 Dec 12 18:33:32.883572 kernel: CPU topo: Num. cores per package: 2 Dec 12 18:33:32.883580 kernel: CPU topo: Num. 
threads per package: 2 Dec 12 18:33:32.883591 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 12 18:33:32.883599 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 12 18:33:32.883607 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 12 18:33:32.883616 kernel: Booting paravirtualized kernel on KVM Dec 12 18:33:32.883624 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 12 18:33:32.883632 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 12 18:33:32.883641 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 12 18:33:32.883649 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 12 18:33:32.883657 kernel: pcpu-alloc: [0] 0 1 Dec 12 18:33:32.883668 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 12 18:33:32.883677 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:33:32.883686 kernel: random: crng init done Dec 12 18:33:32.883694 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 18:33:32.883703 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 12 18:33:32.883711 kernel: Fallback order for Node 0: 0 Dec 12 18:33:32.883719 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Dec 12 18:33:32.883727 kernel: Policy zone: DMA32 Dec 12 18:33:32.883738 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 18:33:32.883746 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 12 18:33:32.883754 kernel: Kernel/User page tables isolation: enabled Dec 12 18:33:32.883762 kernel: ftrace: allocating 40103 entries in 157 pages Dec 12 18:33:32.883771 kernel: ftrace: allocated 157 pages with 5 groups Dec 12 18:33:32.883779 kernel: Dynamic Preempt: voluntary Dec 12 18:33:32.883787 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 18:33:32.883796 kernel: rcu: RCU event tracing is enabled. Dec 12 18:33:32.883805 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 12 18:33:32.883816 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 18:33:32.883825 kernel: Rude variant of Tasks RCU enabled. Dec 12 18:33:32.883833 kernel: Tracing variant of Tasks RCU enabled. Dec 12 18:33:32.883841 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 18:33:32.883849 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 12 18:33:32.883857 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:33:32.883868 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:33:32.883877 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:33:32.883885 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 12 18:33:32.883896 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 12 18:33:32.883904 kernel: Console: colour VGA+ 80x25 Dec 12 18:33:32.883912 kernel: printk: legacy console [tty0] enabled Dec 12 18:33:32.883920 kernel: printk: legacy console [ttyS0] enabled Dec 12 18:33:32.883928 kernel: ACPI: Core revision 20240827 Dec 12 18:33:32.883936 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 12 18:33:32.883954 kernel: APIC: Switch to symmetric I/O mode setup Dec 12 18:33:32.883965 kernel: x2apic enabled Dec 12 18:33:32.883973 kernel: APIC: Switched APIC routing to: physical x2apic Dec 12 18:33:32.883982 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 12 18:33:32.883991 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Dec 12 18:33:32.884001 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Dec 12 18:33:32.884012 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 12 18:33:32.884021 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 12 18:33:32.884041 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 12 18:33:32.884050 kernel: Spectre V2 : Mitigation: Retpolines Dec 12 18:33:32.884062 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 12 18:33:32.884072 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 12 18:33:32.884083 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 12 18:33:32.884092 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 12 18:33:32.884100 kernel: MDS: Mitigation: Clear CPU buffers Dec 12 18:33:32.884109 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 12 18:33:32.884117 kernel: active return thunk: its_return_thunk Dec 12 18:33:32.884126 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 12 18:33:32.884135 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 12 18:33:32.884147 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 12 18:33:32.884155 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 12 18:33:32.884164 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 12 18:33:32.884173 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 12 18:33:32.884181 kernel: Freeing SMP alternatives memory: 32K Dec 12 18:33:32.884190 kernel: pid_max: default: 32768 minimum: 301 Dec 12 18:33:32.884198 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 18:33:32.884211 kernel: landlock: Up and running. Dec 12 18:33:32.884224 kernel: SELinux: Initializing. Dec 12 18:33:32.884241 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 18:33:32.884259 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 18:33:32.884272 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Dec 12 18:33:32.884285 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Dec 12 18:33:32.884294 kernel: signal: max sigframe size: 1776 Dec 12 18:33:32.884308 kernel: rcu: Hierarchical SRCU implementation. Dec 12 18:33:32.884321 kernel: rcu: Max phase no-delay instances is 400. 
Dec 12 18:33:32.884334 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 18:33:32.884348 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 12 18:33:32.884367 kernel: smp: Bringing up secondary CPUs ... Dec 12 18:33:32.884385 kernel: smpboot: x86: Booting SMP configuration: Dec 12 18:33:32.884398 kernel: .... node #0, CPUs: #1 Dec 12 18:33:32.884410 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 18:33:32.884423 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Dec 12 18:33:32.884437 kernel: Memory: 1958716K/2096612K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133332K reserved, 0K cma-reserved) Dec 12 18:33:32.884446 kernel: devtmpfs: initialized Dec 12 18:33:32.884455 kernel: x86/mm: Memory block size: 128MB Dec 12 18:33:32.884464 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 18:33:32.884476 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 12 18:33:32.884485 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 18:33:32.884494 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 18:33:32.884502 kernel: audit: initializing netlink subsys (disabled) Dec 12 18:33:32.884511 kernel: audit: type=2000 audit(1765564409.729:1): state=initialized audit_enabled=0 res=1 Dec 12 18:33:32.884520 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 18:33:32.884529 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 12 18:33:32.884537 kernel: cpuidle: using governor menu Dec 12 18:33:32.884546 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 18:33:32.884558 kernel: dca service started, version 1.12.1 Dec 12 18:33:32.884566 kernel: PCI: Using configuration type 1 for base access Dec 12 18:33:32.884575 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 12 18:33:32.884584 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 18:33:32.884592 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 18:33:32.884601 kernel: ACPI: Added _OSI(Module Device) Dec 12 18:33:32.884610 kernel: ACPI: Added _OSI(Processor Device) Dec 12 18:33:32.884618 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 18:33:32.884627 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 18:33:32.884638 kernel: ACPI: Interpreter enabled Dec 12 18:33:32.884647 kernel: ACPI: PM: (supports S0 S5) Dec 12 18:33:32.884655 kernel: ACPI: Using IOAPIC for interrupt routing Dec 12 18:33:32.884664 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 12 18:33:32.884673 kernel: PCI: Using E820 reservations for host bridge windows Dec 12 18:33:32.884681 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 12 18:33:32.884690 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 18:33:32.884913 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 12 18:33:32.888127 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 12 18:33:32.888290 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 12 18:33:32.888305 kernel: acpiphp: Slot [3] registered Dec 12 18:33:32.888315 kernel: acpiphp: Slot [4] registered Dec 12 18:33:32.888324 kernel: acpiphp: Slot [5] registered Dec 12 18:33:32.888333 kernel: acpiphp: Slot [6] registered Dec 12 18:33:32.888342 kernel: acpiphp: Slot [7] registered Dec 12 18:33:32.888350 kernel: acpiphp: Slot [8] registered Dec 12 18:33:32.888366 kernel: acpiphp: Slot [9] registered Dec 12 18:33:32.888374 kernel: acpiphp: Slot [10] registered Dec 12 18:33:32.888383 kernel: acpiphp: Slot [11] registered Dec 12 18:33:32.888393 kernel: acpiphp: Slot [12] registered Dec 12 18:33:32.888402 kernel: acpiphp: Slot [13] registered Dec 12 18:33:32.888411 kernel: acpiphp: Slot [14] registered Dec 12 18:33:32.888419 kernel: acpiphp: Slot [15] registered Dec 12 18:33:32.888428 kernel: acpiphp: Slot [16] registered Dec 12 18:33:32.888437 kernel: acpiphp: Slot [17] registered Dec 12 18:33:32.888445 kernel: acpiphp: Slot [18] registered Dec 12 18:33:32.888457 kernel: acpiphp: Slot [19] registered Dec 12 18:33:32.888466 kernel: acpiphp: Slot [20] registered Dec 12 18:33:32.888475 kernel: acpiphp: Slot [21] registered Dec 12 18:33:32.888483 kernel: acpiphp: Slot [22] registered Dec 12 18:33:32.888492 kernel: acpiphp: Slot [23] registered Dec 12 18:33:32.888501 kernel: acpiphp: Slot [24] registered Dec 12 18:33:32.888509 kernel: acpiphp: Slot [25] registered Dec 12 18:33:32.888518 kernel: acpiphp: Slot [26] registered Dec 12 18:33:32.888527 kernel: acpiphp: Slot [27] registered Dec 12 18:33:32.888538 kernel: acpiphp: Slot [28] registered Dec 12 18:33:32.888547 kernel: acpiphp: Slot [29] registered Dec 12 18:33:32.888556 kernel: acpiphp: Slot [30] registered Dec 12 18:33:32.888565 kernel: acpiphp: Slot [31] registered Dec 12 18:33:32.888573 kernel: PCI host bridge to bus 0000:00 Dec 12 18:33:32.888725 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 12 18:33:32.888815 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 12 18:33:32.888900 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 12 18:33:32.888990 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Dec 12 18:33:32.889104 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 12 18:33:32.889187 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 18:33:32.889316 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Dec 12 18:33:32.889430 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Dec 12 18:33:32.889540 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Dec 12 18:33:32.889642 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Dec 12 18:33:32.889735 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Dec 12 18:33:32.889827 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Dec 12 18:33:32.889925 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Dec 12 18:33:32.891486 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Dec 12 18:33:32.891664 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Dec 12 18:33:32.891777 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Dec 12 18:33:32.891893 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Dec 12 18:33:32.891999 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 12 18:33:32.892104 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 12 18:33:32.892226 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Dec 12 18:33:32.892350 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Dec 12 18:33:32.892449 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Dec 12 18:33:32.892548 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Dec 12 18:33:32.892643 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Dec 12 18:33:32.892737 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 12 18:33:32.892852 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 12 18:33:32.892950 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Dec 12 18:33:32.895136 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Dec 12 18:33:32.895278 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Dec 12 18:33:32.895413 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 12 18:33:32.895538 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Dec 12 18:33:32.895673 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Dec 12 18:33:32.895771 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 12 18:33:32.895880 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:33:32.895975 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Dec 12 18:33:32.896081 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Dec 12 18:33:32.896181 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 12 18:33:32.896287 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:33:32.896380 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Dec 12 18:33:32.896470 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Dec 12 18:33:32.896561 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Dec 12 18:33:32.896695 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:33:32.896796 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Dec 12 18:33:32.896893 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Dec 12 18:33:32.896984 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Dec 12 18:33:32.898540 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Dec 12 18:33:32.898658 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Dec 12 18:33:32.898754 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Dec 12 18:33:32.898767 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 12 18:33:32.898783 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 12 18:33:32.898792 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 12 18:33:32.898805 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 12 18:33:32.898814 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 12 18:33:32.898824 kernel: iommu: Default domain type: Translated Dec 12 18:33:32.898833 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 12 18:33:32.898842 kernel: PCI: Using ACPI for IRQ routing Dec 12 18:33:32.898851 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 12 18:33:32.898860 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 12 18:33:32.898872 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Dec 12 18:33:32.898984 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 12 18:33:32.899120 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 12 18:33:32.899214 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 12 18:33:32.899225 kernel: vgaarb: loaded Dec 12 18:33:32.899234 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 12 18:33:32.899243 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 12 18:33:32.899252 kernel: clocksource: Switched to clocksource kvm-clock Dec 12 18:33:32.899261 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 18:33:32.899275 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 18:33:32.899284 kernel: pnp: PnP ACPI init Dec 12 18:33:32.899293 kernel: pnp: PnP ACPI: found 4 devices Dec 12 18:33:32.899302 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 12 18:33:32.899311 kernel: NET: Registered PF_INET protocol family Dec 12 18:33:32.899319 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 18:33:32.899328 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 12 18:33:32.899337 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 18:33:32.899346 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 12 18:33:32.899357 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 12 18:33:32.899366 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 12 18:33:32.899375 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 18:33:32.899384 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 18:33:32.899392 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 18:33:32.899401 kernel: NET: Registered PF_XDP protocol family Dec 12 18:33:32.899493 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 12 18:33:32.899578 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Dec 12 18:33:32.899668 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 12 18:33:32.899755 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 12 18:33:32.899910 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 12 18:33:32.900012 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 12 18:33:32.900124 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 12 18:33:32.900138 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 12 18:33:32.900236 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28036 usecs Dec 12 18:33:32.900249 kernel: PCI: CLS 0 bytes, default 64 Dec 12 18:33:32.900263 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 12 18:33:32.900273 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Dec 12 18:33:32.900282 kernel: Initialise system trusted keyrings Dec 12 18:33:32.900292 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 12 18:33:32.900301 kernel: Key type asymmetric registered Dec 12 18:33:32.900310 kernel: Asymmetric key parser 'x509' registered Dec 12 18:33:32.900319 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 12 18:33:32.900328 kernel: io scheduler mq-deadline registered Dec 12 18:33:32.900337 kernel: io scheduler kyber registered Dec 12 18:33:32.900350 kernel: io scheduler bfq registered Dec 12 18:33:32.900359 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 12 18:33:32.900369 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 12 18:33:32.900378 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 12 18:33:32.900386 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 12 18:33:32.900395 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 18:33:32.900404 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 12 18:33:32.900413 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 12 18:33:32.900422 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 12 18:33:32.900433 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 12 18:33:32.900442 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 12 18:33:32.900579 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 12 18:33:32.900673 kernel: rtc_cmos 00:03: registered as rtc0 Dec 12 18:33:32.900776 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:33:32 UTC (1765564412) Dec 12 18:33:32.900866 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 12 18:33:32.900878 kernel: intel_pstate: CPU model not supported Dec 12 18:33:32.900887 kernel: NET: Registered PF_INET6 protocol family Dec 12 18:33:32.900901 kernel: Segment Routing with IPv6 Dec 12 18:33:32.900910 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 18:33:32.900919 kernel: NET: Registered PF_PACKET protocol family Dec 12 18:33:32.900928 kernel: Key type dns_resolver registered Dec 12 18:33:32.900937 kernel: IPI shorthand broadcast: enabled Dec 12 18:33:32.900946 kernel: sched_clock: Marking stable (3169049231, 156717472)->(3471300361, -145533658) Dec 12 18:33:32.900955 kernel: registered taskstats version 1 Dec 12 18:33:32.900964 kernel: Loading compiled-in X.509 certificates Dec 12 18:33:32.900973 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 
12 18:33:32.900985 kernel: Demotion targets for Node 0: null Dec 12 18:33:32.900993 kernel: Key type .fscrypt registered Dec 12 18:33:32.901002 kernel: Key type fscrypt-provisioning registered Dec 12 18:33:32.901039 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 18:33:32.901066 kernel: ima: Allocated hash algorithm: sha1 Dec 12 18:33:32.901075 kernel: ima: No architecture policies found Dec 12 18:33:32.901084 kernel: clk: Disabling unused clocks Dec 12 18:33:32.901094 kernel: Warning: unable to open an initial console. Dec 12 18:33:32.901103 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 12 18:33:32.901115 kernel: Write protecting the kernel read-only data: 40960k Dec 12 18:33:32.901125 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 12 18:33:32.901134 kernel: Run /init as init process Dec 12 18:33:32.901143 kernel: with arguments: Dec 12 18:33:32.901153 kernel: /init Dec 12 18:33:32.901162 kernel: with environment: Dec 12 18:33:32.901171 kernel: HOME=/ Dec 12 18:33:32.901179 kernel: TERM=linux Dec 12 18:33:32.901190 systemd[1]: Successfully made /usr/ read-only. Dec 12 18:33:32.901207 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:33:32.901217 systemd[1]: Detected virtualization kvm. Dec 12 18:33:32.901227 systemd[1]: Detected architecture x86-64. Dec 12 18:33:32.901236 systemd[1]: Running in initrd. Dec 12 18:33:32.901245 systemd[1]: No hostname configured, using default hostname. Dec 12 18:33:32.901255 systemd[1]: Hostname set to <localhost>. Dec 12 18:33:32.901265 systemd[1]: Initializing machine ID from VM UUID. Dec 12 18:33:32.901278 systemd[1]: Queued start job for default target initrd.target. Dec 12 18:33:32.901288 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:33:32.901297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:33:32.901307 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 18:33:32.901317 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:33:32.901327 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 18:33:32.901341 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 18:33:32.901352 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 12 18:33:32.901362 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 12 18:33:32.901371 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:33:32.901381 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:33:32.901391 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:33:32.901403 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:33:32.901413 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:33:32.901423 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:33:32.901433 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:33:32.901443 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:33:32.901452 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 18:33:32.901462 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 18:33:32.901473 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:33:32.901482 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:33:32.901494 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:33:32.901504 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:33:32.901513 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 18:33:32.901523 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:33:32.901532 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 18:33:32.901542 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 18:33:32.901552 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 18:33:32.901562 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:33:32.901575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:33:32.901584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:33:32.901594 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 18:33:32.901605 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:33:32.901615 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 18:33:32.901628 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:33:32.901674 systemd-journald[191]: Collecting audit messages is disabled. Dec 12 18:33:32.901698 systemd-journald[191]: Journal started Dec 12 18:33:32.901724 systemd-journald[191]: Runtime Journal (/run/log/journal/adced3ed169d4e479fb9b8e9fbaebd3d) is 4.9M, max 39.2M, 34.3M free. Dec 12 18:33:32.866592 systemd-modules-load[193]: Inserted module 'overlay' Dec 12 18:33:32.905136 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:33:32.907204 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:33:32.970042 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 18:33:32.970078 kernel: Bridge firewalling registered Dec 12 18:33:32.911765 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:33:32.923360 systemd-modules-load[193]: Inserted module 'br_netfilter' Dec 12 18:33:32.971693 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:33:32.976229 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:33:32.981164 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 18:33:32.983011 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 12 18:33:32.985005 systemd-tmpfiles[205]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 18:33:32.986422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:33:32.994232 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:33:33.008126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:33:33.012005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:33:33.017196 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:33:33.018660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:33:33.021022 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 18:33:33.051220 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:33:33.069211 systemd-resolved[232]: Positive Trust Anchors: Dec 12 18:33:33.069224 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:33:33.069264 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:33:33.075415 systemd-resolved[232]: Defaulting to hostname 'linux'. Dec 12 18:33:33.077964 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:33:33.078744 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:33:33.155081 kernel: SCSI subsystem initialized Dec 12 18:33:33.166063 kernel: Loading iSCSI transport class v2.0-870. Dec 12 18:33:33.177066 kernel: iscsi: registered transport (tcp) Dec 12 18:33:33.209627 kernel: iscsi: registered transport (qla4xxx) Dec 12 18:33:33.209704 kernel: QLogic iSCSI HBA Driver Dec 12 18:33:33.235176 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:33:33.264390 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:33:33.265430 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:33:33.321822 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 18:33:33.325191 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Dec 12 18:33:33.389119 kernel: raid6: avx2x4 gen() 17236 MB/s Dec 12 18:33:33.404102 kernel: raid6: avx2x2 gen() 17718 MB/s Dec 12 18:33:33.421183 kernel: raid6: avx2x1 gen() 12999 MB/s Dec 12 18:33:33.421282 kernel: raid6: using algorithm avx2x2 gen() 17718 MB/s Dec 12 18:33:33.439314 kernel: raid6: .... xor() 19751 MB/s, rmw enabled Dec 12 18:33:33.439413 kernel: raid6: using avx2x2 recovery algorithm Dec 12 18:33:33.462069 kernel: xor: automatically using best checksumming function avx Dec 12 18:33:33.675112 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 18:33:33.682587 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:33:33.685275 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:33:33.712250 systemd-udevd[442]: Using default interface naming scheme 'v255'. Dec 12 18:33:33.721337 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:33:33.725551 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 18:33:33.754403 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Dec 12 18:33:33.785728 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:33:33.788552 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:33:33.859495 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:33:33.861937 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 18:33:33.951555 kernel: libata version 3.00 loaded. Dec 12 18:33:33.959097 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 12 18:33:33.967169 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 12 18:33:33.967525 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 12 18:33:34.000066 kernel: scsi host0: ata_piix Dec 12 18:33:34.024551 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 18:33:34.024582 kernel: GPT:9289727 != 125829119 Dec 12 18:33:34.024600 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 18:33:34.024619 kernel: GPT:9289727 != 125829119 Dec 12 18:33:34.024637 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 18:33:34.024655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:33:34.024685 kernel: scsi host1: ata_piix Dec 12 18:33:34.024939 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Dec 12 18:33:34.025056 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Dec 12 18:33:34.033117 kernel: ACPI: bus type USB registered Dec 12 18:33:34.033202 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 18:33:34.035472 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Dec 12 18:33:34.041067 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 12 18:33:34.053789 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Dec 12 18:33:34.056049 kernel: usbcore: registered new interface driver usbfs Dec 12 18:33:34.056119 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 12 18:33:34.069106 kernel: usbcore: registered new interface driver hub Dec 12 18:33:34.069184 kernel: scsi host2: Virtio SCSI HBA Dec 12 18:33:34.076099 kernel: usbcore: registered new device driver usb Dec 12 18:33:34.088699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 12 18:33:34.089983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:33:34.091771 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:33:34.094550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:33:34.096647 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:33:34.189632 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:33:34.213092 kernel: AES CTR mode by8 optimization enabled Dec 12 18:33:34.314436 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 18:33:34.326314 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 12 18:33:34.326556 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 12 18:33:34.326737 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 12 18:33:34.326917 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 12 18:33:34.327447 kernel: hub 1-0:1.0: USB hub found Dec 12 18:33:34.327671 kernel: hub 1-0:1.0: 2 ports detected Dec 12 18:33:34.337899 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 18:33:34.349216 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:33:34.350308 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 18:33:34.358856 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 12 18:33:34.359672 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 18:33:34.362335 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:33:34.363241 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:33:34.364615 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:33:34.367168 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 18:33:34.368504 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 18:33:34.388119 disk-uuid[592]: Primary Header is updated. Dec 12 18:33:34.388119 disk-uuid[592]: Secondary Entries is updated. Dec 12 18:33:34.388119 disk-uuid[592]: Secondary Header is updated. Dec 12 18:33:34.398868 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:33:34.403077 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:33:35.415169 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:33:35.415286 disk-uuid[594]: The operation has completed successfully. Dec 12 18:33:35.473174 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 18:33:35.473341 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 18:33:35.506220 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 18:33:35.537124 sh[611]: Success Dec 12 18:33:35.562392 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 12 18:33:35.562527 kernel: device-mapper: uevent: version 1.0.3 Dec 12 18:33:35.565042 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 18:33:35.576146 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Dec 12 18:33:35.630320 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:33:35.635160 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 12 18:33:35.650882 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 12 18:33:35.662077 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (623) Dec 12 18:33:35.665710 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 12 18:33:35.666018 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:33:35.672223 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 18:33:35.672310 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 18:33:35.675524 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 18:33:35.676623 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:33:35.677267 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 18:33:35.678070 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 18:33:35.681239 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 18:33:35.715684 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656) Dec 12 18:33:35.718087 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:33:35.720093 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:33:35.726172 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:33:35.726258 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:33:35.734416 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:33:35.735409 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 18:33:35.739369 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 18:33:35.869437 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:33:35.886221 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:33:35.918659 ignition[701]: Ignition 2.22.0 Dec 12 18:33:35.919538 ignition[701]: Stage: fetch-offline Dec 12 18:33:35.919596 ignition[701]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:33:35.919606 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:33:35.923321 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 12 18:33:35.919712 ignition[701]: parsed url from cmdline: "" Dec 12 18:33:35.919716 ignition[701]: no config URL provided Dec 12 18:33:35.919721 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:33:35.919729 ignition[701]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:33:35.919734 ignition[701]: failed to fetch config: resource requires networking Dec 12 18:33:35.919913 ignition[701]: Ignition finished successfully Dec 12 18:33:35.936232 systemd-networkd[797]: lo: Link UP Dec 12 18:33:35.936246 systemd-networkd[797]: lo: Gained carrier Dec 12 18:33:35.939578 systemd-networkd[797]: Enumeration completed Dec 12 18:33:35.940116 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 12 18:33:35.940121 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 12 18:33:35.940244 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:33:35.941884 systemd-networkd[797]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:33:35.941889 systemd-networkd[797]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:33:35.942020 systemd[1]: Reached target network.target - Network. Dec 12 18:33:35.942939 systemd-networkd[797]: eth0: Link UP Dec 12 18:33:35.943594 systemd-networkd[797]: eth1: Link UP Dec 12 18:33:35.944217 systemd-networkd[797]: eth0: Gained carrier Dec 12 18:33:35.944231 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 12 18:33:35.945605 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 12 18:33:35.950659 systemd-networkd[797]: eth1: Gained carrier Dec 12 18:33:35.950680 systemd-networkd[797]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 12 18:33:35.963172 systemd-networkd[797]: eth0: DHCPv4 address 134.199.220.206/20, gateway 134.199.208.1 acquired from 169.254.169.253 Dec 12 18:33:35.970116 systemd-networkd[797]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253 Dec 12 18:33:35.987315 ignition[802]: Ignition 2.22.0 Dec 12 18:33:35.987346 ignition[802]: Stage: fetch Dec 12 18:33:35.987554 ignition[802]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:33:35.987568 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:33:35.987695 ignition[802]: parsed url from cmdline: "" Dec 12 18:33:35.987700 ignition[802]: no config URL provided Dec 12 18:33:35.987708 ignition[802]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:33:35.987719 ignition[802]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:33:35.987753 ignition[802]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 12 18:33:36.024550 ignition[802]: GET result: OK Dec 12 18:33:36.024886 ignition[802]: parsing config with SHA512: aabc81bd71e2be23de314b8275b7ead90f4992417281f4a91edd8796c3bd746ec813996b6fa44f02e612cd976427b2bde34dff7d141be0ac06f9c0042731773a Dec 12 18:33:36.029611 unknown[802]: fetched base config from "system" Dec 12 18:33:36.029623 unknown[802]: fetched base config from "system" Dec 12 18:33:36.029951 ignition[802]: fetch: fetch complete Dec 12 18:33:36.029629 unknown[802]: fetched user config from "digitalocean" Dec 12 18:33:36.029956 ignition[802]: fetch: fetch passed Dec 12 18:33:36.032478 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 18:33:36.030021 ignition[802]: Ignition finished successfully Dec 12 18:33:36.034305 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 18:33:36.068838 ignition[809]: Ignition 2.22.0 Dec 12 18:33:36.069601 ignition[809]: Stage: kargs Dec 12 18:33:36.069769 ignition[809]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:33:36.069780 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:33:36.070835 ignition[809]: kargs: kargs passed Dec 12 18:33:36.072787 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:33:36.070890 ignition[809]: Ignition finished successfully Dec 12 18:33:36.076192 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 18:33:36.125722 ignition[815]: Ignition 2.22.0 Dec 12 18:33:36.126498 ignition[815]: Stage: disks Dec 12 18:33:36.127160 ignition[815]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:33:36.127672 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:33:36.130654 ignition[815]: disks: disks passed Dec 12 18:33:36.131187 ignition[815]: Ignition finished successfully Dec 12 18:33:36.133846 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 18:33:36.134620 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:33:36.135430 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:33:36.136500 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:33:36.137535 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:33:36.138427 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:33:36.140694 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 12 18:33:36.169711 systemd-fsck[823]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 18:33:36.173517 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:33:36.175763 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 18:33:36.300043 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 12 18:33:36.300738 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:33:36.301770 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:33:36.303735 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:33:36.305460 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 18:33:36.309182 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Dec 12 18:33:36.316161 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 12 18:33:36.316829 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:33:36.316919 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:33:36.325333 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (831) Dec 12 18:33:36.329280 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:33:36.329574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:33:36.334327 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 18:33:36.339904 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 18:33:36.345302 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:33:36.345370 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:33:36.353194 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:33:36.401243 coreos-metadata[834]: Dec 12 18:33:36.401 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:33:36.413364 coreos-metadata[833]: Dec 12 18:33:36.412 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:33:36.416277 coreos-metadata[834]: Dec 12 18:33:36.414 INFO Fetch successful Dec 12 18:33:36.417606 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:33:36.423486 coreos-metadata[834]: Dec 12 18:33:36.423 INFO wrote hostname ci-4459.2.2-8-48b4194eb4 to /sysroot/etc/hostname Dec 12 18:33:36.426054 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 18:33:36.428204 coreos-metadata[833]: Dec 12 18:33:36.425 INFO Fetch successful Dec 12 18:33:36.429936 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:33:36.436888 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Dec 12 18:33:36.437908 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Dec 12 18:33:36.439149 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:33:36.444336 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:33:36.555756 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:33:36.558078 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Dec 12 18:33:36.560329 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:33:36.587230 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:33:36.608379 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:33:36.640533 ignition[953]: INFO : Ignition 2.22.0
Dec 12 18:33:36.643081 ignition[953]: INFO : Stage: mount
Dec 12 18:33:36.643081 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:33:36.643081 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 12 18:33:36.646150 ignition[953]: INFO : mount: mount passed
Dec 12 18:33:36.646150 ignition[953]: INFO : Ignition finished successfully
Dec 12 18:33:36.647650 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:33:36.650721 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:33:36.661784 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:33:36.674915 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:33:36.703138 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965)
Dec 12 18:33:36.703268 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:33:36.705357 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:33:36.712073 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 18:33:36.712190 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 18:33:36.715229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:33:36.770949 ignition[982]: INFO : Ignition 2.22.0
Dec 12 18:33:36.772289 ignition[982]: INFO : Stage: files
Dec 12 18:33:36.772289 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:33:36.772289 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 12 18:33:36.774847 ignition[982]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 18:33:36.775842 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 18:33:36.775842 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 18:33:36.780829 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 18:33:36.781924 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 18:33:36.782843 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 18:33:36.782609 unknown[982]: wrote ssh authorized keys file for user: core
Dec 12 18:33:36.784583 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:33:36.785543 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 12 18:33:36.822201 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 18:33:36.875178 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:33:36.876381 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:33:36.876381 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:33:36.876381 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:33:36.876381 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:33:36.876381 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:33:36.876381 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:33:36.876381 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:33:36.876381 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:33:36.887443 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:33:36.887443 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:33:36.887443 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:33:36.887443 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:33:36.887443 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:33:36.887443 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 12 18:33:37.273744 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 18:33:37.283256 systemd-networkd[797]: eth1: Gained IPv6LL
Dec 12 18:33:37.730845 systemd-networkd[797]: eth0: Gained IPv6LL
Dec 12 18:33:37.813389 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:33:37.813389 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 18:33:37.816521 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:33:37.818746 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:33:37.818746 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 18:33:37.818746 ignition[982]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 18:33:37.818746 ignition[982]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 18:33:37.818746 ignition[982]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:33:37.818746 ignition[982]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:33:37.818746 ignition[982]: INFO : files: files passed
Dec 12 18:33:37.818746 ignition[982]: INFO : Ignition finished successfully
Dec 12 18:33:37.824438 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:33:37.828972 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:33:37.831912 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:33:37.844478 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:33:37.850384 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:33:37.868008 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:33:37.868008 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:33:37.870770 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:33:37.873113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:33:37.874764 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:33:37.877230 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:33:37.936552 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:33:37.936733 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:33:37.938186 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:33:37.938971 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:33:37.940066 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:33:37.941231 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:33:37.985521 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:33:37.989730 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:33:38.032308 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:33:38.033696 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:33:38.035018 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:33:38.035558 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:33:38.035775 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:33:38.038392 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:33:38.038923 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:33:38.039561 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:33:38.041228 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:33:38.042196 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:33:38.042884 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:33:38.043935 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:33:38.044886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:33:38.045961 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:33:38.046778 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:33:38.047955 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:33:38.048678 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:33:38.048834 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:33:38.050010 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:33:38.051220 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:33:38.052018 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:33:38.052203 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:33:38.053294 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:33:38.053568 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:33:38.054590 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:33:38.054790 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:33:38.056368 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:33:38.056648 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:33:38.058095 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 12 18:33:38.058391 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 12 18:33:38.062186 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:33:38.063003 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:33:38.065321 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:33:38.069196 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:33:38.069790 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:33:38.070152 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:33:38.073384 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:33:38.075317 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:33:38.085860 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:33:38.088659 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:33:38.118485 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:33:38.121064 ignition[1035]: INFO : Ignition 2.22.0
Dec 12 18:33:38.121064 ignition[1035]: INFO : Stage: umount
Dec 12 18:33:38.123381 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:33:38.123381 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 12 18:33:38.125089 ignition[1035]: INFO : umount: umount passed
Dec 12 18:33:38.125745 ignition[1035]: INFO : Ignition finished successfully
Dec 12 18:33:38.128536 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:33:38.128757 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:33:38.130310 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:33:38.130472 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:33:38.132568 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:33:38.132714 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:33:38.133770 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:33:38.133857 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:33:38.134758 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 18:33:38.134818 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 18:33:38.135881 systemd[1]: Stopped target network.target - Network.
Dec 12 18:33:38.136790 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:33:38.136893 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:33:38.137692 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:33:38.138571 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:33:38.142162 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:33:38.143492 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:33:38.143987 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:33:38.144868 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:33:38.144931 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:33:38.145747 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:33:38.145810 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:33:38.146563 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:33:38.146665 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:33:38.147654 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:33:38.147723 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:33:38.148532 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:33:38.148616 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:33:38.149667 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:33:38.150626 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:33:38.159157 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:33:38.159311 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:33:38.164772 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 18:33:38.165980 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:33:38.166878 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:33:38.169491 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 18:33:38.170488 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:33:38.171367 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:33:38.171421 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:33:38.173480 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:33:38.177172 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:33:38.177305 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:33:38.178150 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:33:38.178256 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:33:38.180449 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:33:38.180552 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:33:38.182276 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:33:38.182365 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:33:38.184682 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:33:38.188307 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:33:38.188420 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:33:38.199749 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:33:38.200093 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:33:38.201800 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:33:38.201906 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:33:38.203351 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:33:38.203415 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:33:38.204874 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:33:38.204984 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:33:38.207653 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:33:38.207738 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:33:38.209878 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:33:38.209966 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:33:38.214393 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:33:38.215975 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:33:38.216662 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:33:38.218320 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:33:38.218401 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:33:38.220672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:33:38.221218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:33:38.224546 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 12 18:33:38.225336 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 12 18:33:38.225382 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:33:38.225898 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:33:38.226812 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:33:38.235393 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:33:38.235539 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:33:38.237402 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 18:33:38.239631 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 18:33:38.261895 systemd[1]: Switching root.
Dec 12 18:33:38.308094 systemd-journald[191]: Journal stopped
Dec 12 18:33:39.607078 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Dec 12 18:33:39.607207 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 18:33:39.607226 kernel: SELinux: policy capability open_perms=1
Dec 12 18:33:39.607238 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 18:33:39.607250 kernel: SELinux: policy capability always_check_network=0
Dec 12 18:33:39.607262 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 18:33:39.607274 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 18:33:39.607297 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 18:33:39.607309 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 18:33:39.607321 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 18:33:39.607333 kernel: audit: type=1403 audit(1765564418.490:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 18:33:39.607350 systemd[1]: Successfully loaded SELinux policy in 74.526ms.
Dec 12 18:33:39.607382 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.723ms.
Dec 12 18:33:39.607397 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:33:39.607412 systemd[1]: Detected virtualization kvm.
Dec 12 18:33:39.607428 systemd[1]: Detected architecture x86-64.
Dec 12 18:33:39.607440 systemd[1]: Detected first boot.
Dec 12 18:33:39.607454 systemd[1]: Hostname set to <ci-4459.2.2-8-48b4194eb4>.
Dec 12 18:33:39.607467 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:33:39.607480 zram_generator::config[1081]: No configuration found.
Dec 12 18:33:39.607498 kernel: Guest personality initialized and is inactive
Dec 12 18:33:39.607509 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 12 18:33:39.607521 kernel: Initialized host personality
Dec 12 18:33:39.607535 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 18:33:39.607548 systemd[1]: Populated /etc with preset unit settings.
Dec 12 18:33:39.607563 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 18:33:39.607577 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 18:33:39.607590 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 18:33:39.607603 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:33:39.607617 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 18:33:39.607629 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 18:33:39.607642 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 18:33:39.609145 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 18:33:39.609164 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 18:33:39.609178 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 18:33:39.609192 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 18:33:39.609207 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 18:33:39.609226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:33:39.609245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:33:39.609259 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 18:33:39.609277 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 18:33:39.609290 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 18:33:39.609303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:33:39.609318 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 12 18:33:39.609330 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:33:39.609344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:33:39.609356 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 18:33:39.609373 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:33:39.609385 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:33:39.609416 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 18:33:39.609429 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:33:39.609441 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:33:39.609455 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:33:39.609468 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:33:39.609480 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 18:33:39.609493 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 18:33:39.609509 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 18:33:39.609522 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:33:39.609536 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:33:39.609548 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:33:39.609562 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 18:33:39.609574 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 18:33:39.609586 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 18:33:39.609598 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 18:33:39.609610 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:33:39.609626 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 18:33:39.609638 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 18:33:39.609654 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 18:33:39.609667 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 18:33:39.609680 systemd[1]: Reached target machines.target - Containers.
Dec 12 18:33:39.609697 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 18:33:39.609711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:33:39.609723 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:33:39.609735 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 18:33:39.609750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:33:39.609762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:33:39.609775 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:33:39.609788 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 18:33:39.609801 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:33:39.609814 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 18:33:39.609826 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 18:33:39.609839 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 18:33:39.609854 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 18:33:39.609867 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 18:33:39.609880 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:33:39.609892 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:33:39.609908 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:33:39.609923 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:33:39.609936 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 18:33:39.609948 kernel: fuse: init (API version 7.41)
Dec 12 18:33:39.609962 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 18:33:39.609979 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:33:39.609996 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 18:33:39.610008 systemd[1]: Stopped verity-setup.service.
Dec 12 18:33:39.610021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:33:39.611130 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 18:33:39.611165 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 18:33:39.611205 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 18:33:39.611226 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 18:33:39.611247 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 18:33:39.611268 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 18:33:39.611298 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:33:39.611318 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 18:33:39.611337 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 18:33:39.611357 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 18:33:39.611376 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:33:39.611395 kernel: ACPI: bus type drm_connector registered
Dec 12 18:33:39.611418 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:33:39.611436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:33:39.611454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:33:39.611474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:33:39.611487 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:33:39.611512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:33:39.611525 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:33:39.611538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:33:39.611551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:33:39.611564 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 18:33:39.611581 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 18:33:39.611597 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:33:39.611613 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 18:33:39.611626 kernel: loop: module loaded
Dec 12 18:33:39.611638 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:33:39.611651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:33:39.611663 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 18:33:39.611676 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:33:39.611689 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 18:33:39.611701 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 18:33:39.611717 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:33:39.611784 systemd-journald[1148]: Collecting audit messages is disabled.
Dec 12 18:33:39.611824 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:33:39.611837 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 18:33:39.611850 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 18:33:39.611863 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:33:39.611881 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:33:39.611895 systemd-journald[1148]: Journal started
Dec 12 18:33:39.611924 systemd-journald[1148]: Runtime Journal (/run/log/journal/adced3ed169d4e479fb9b8e9fbaebd3d) is 4.9M, max 39.2M, 34.3M free.
Dec 12 18:33:39.169874 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 18:33:39.615444 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:33:39.195595 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 12 18:33:39.196143 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 18:33:39.620908 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:33:39.624948 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 18:33:39.626102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:33:39.645306 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 18:33:39.657869 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 18:33:39.664665 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 18:33:39.706213 systemd-journald[1148]: Time spent on flushing to /var/log/journal/adced3ed169d4e479fb9b8e9fbaebd3d is 48.920ms for 1010 entries.
Dec 12 18:33:39.706213 systemd-journald[1148]: System Journal (/var/log/journal/adced3ed169d4e479fb9b8e9fbaebd3d) is 8M, max 195.6M, 187.6M free.
Dec 12 18:33:39.774636 kernel: loop0: detected capacity change from 0 to 229808
Dec 12 18:33:39.774765 systemd-journald[1148]: Received client request to flush runtime journal.
Dec 12 18:33:39.774810 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:33:39.715146 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 18:33:39.779667 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 18:33:39.789200 kernel: loop1: detected capacity change from 0 to 128560
Dec 12 18:33:39.789609 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 18:33:39.799716 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 18:33:39.824071 kernel: loop2: detected capacity change from 0 to 8
Dec 12 18:33:39.841635 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:33:39.849083 kernel: loop3: detected capacity change from 0 to 110984
Dec 12 18:33:39.897061 kernel: loop4: detected capacity change from 0 to 229808
Dec 12 18:33:39.918053 kernel: loop5: detected capacity change from 0 to 128560
Dec 12 18:33:39.926586 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 18:33:39.929182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:33:39.948062 kernel: loop6: detected capacity change from 0 to 8
Dec 12 18:33:39.952068 kernel: loop7: detected capacity change from 0 to 110984
Dec 12 18:33:39.963502 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Dec 12 18:33:39.964512 (sd-merge)[1223]: Merged extensions into '/usr'.
Dec 12 18:33:39.974228 systemd[1]: Reload requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:33:39.974247 systemd[1]: Reloading...
Dec 12 18:33:39.998659 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Dec 12 18:33:40.001094 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Dec 12 18:33:40.184089 zram_generator::config[1253]: No configuration found.
Dec 12 18:33:40.424129 ldconfig[1172]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:33:40.647972 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:33:40.648798 systemd[1]: Reloading finished in 673 ms.
Dec 12 18:33:40.667982 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:33:40.669327 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:33:40.670578 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:33:40.679257 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:33:40.691223 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:33:40.694677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:33:40.712334 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:33:40.747194 systemd[1]: Reload requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:33:40.747217 systemd[1]: Reloading...
Dec 12 18:33:40.763363 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:33:40.765463 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:33:40.766497 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:33:40.768613 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 18:33:40.773470 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 18:33:40.776384 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Dec 12 18:33:40.777583 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Dec 12 18:33:40.788204 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:33:40.788363 systemd-tmpfiles[1299]: Skipping /boot
Dec 12 18:33:40.820660 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:33:40.822220 systemd-tmpfiles[1299]: Skipping /boot
Dec 12 18:33:40.913096 zram_generator::config[1327]: No configuration found.
Dec 12 18:33:41.139926 systemd[1]: Reloading finished in 392 ms.
Dec 12 18:33:41.163720 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:33:41.170687 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:33:41.179242 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:33:41.181639 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 18:33:41.185881 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:33:41.193016 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:33:41.198010 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:33:41.202808 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:33:41.222292 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 18:33:41.226639 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:33:41.227015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:33:41.231492 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:33:41.235558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:33:41.248276 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:33:41.249065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:33:41.249253 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:33:41.249378 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:33:41.259929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:33:41.260354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:33:41.260688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:33:41.260917 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:33:41.261139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:33:41.276246 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:33:41.286937 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:33:41.287396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:33:41.298707 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:33:41.299935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:33:41.300006 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:33:41.300579 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:33:41.301884 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:33:41.304067 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:33:41.323083 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 18:33:41.323874 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:33:41.326299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:33:41.326525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:33:41.329636 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:33:41.329828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:33:41.330584 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:33:41.333935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:33:41.334679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:33:41.336317 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:33:41.352523 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:33:41.354479 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:33:41.354667 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:33:41.362234 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:33:41.364709 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 18:33:41.369585 systemd-udevd[1376]: Using default interface naming scheme 'v255'.
Dec 12 18:33:41.410590 augenrules[1418]: No rules
Dec 12 18:33:41.411863 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:33:41.412619 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:33:41.415852 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:33:41.419953 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:33:41.422461 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:33:41.557561 systemd-networkd[1428]: lo: Link UP
Dec 12 18:33:41.558141 systemd-networkd[1428]: lo: Gained carrier
Dec 12 18:33:41.559782 systemd-networkd[1428]: Enumeration completed
Dec 12 18:33:41.560204 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:33:41.566517 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:33:41.576322 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:33:41.594197 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 18:33:41.594817 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:33:41.622432 systemd-resolved[1375]: Positive Trust Anchors:
Dec 12 18:33:41.623472 systemd-resolved[1375]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:33:41.623539 systemd-resolved[1375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:33:41.631954 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:33:41.638548 systemd-resolved[1375]: Using system hostname 'ci-4459.2.2-8-48b4194eb4'.
Dec 12 18:33:41.643777 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:33:41.644976 systemd[1]: Reached target network.target - Network.
Dec 12 18:33:41.645409 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:33:41.645977 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:33:41.646514 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:33:41.647154 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 18:33:41.647894 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 12 18:33:41.648678 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 18:33:41.650238 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 18:33:41.650859 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 18:33:41.651445 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:33:41.651482 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:33:41.651910 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:33:41.653456 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 18:33:41.655845 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 18:33:41.660488 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 18:33:41.661384 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 18:33:41.662134 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 18:33:41.673181 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 18:33:41.674535 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 18:33:41.675822 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 18:33:41.678932 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:33:41.679438 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:33:41.679931 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:33:41.679964 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:33:41.683182 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 18:33:41.687230 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 12 18:33:41.689541 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 18:33:41.699767 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 18:33:41.703216 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 18:33:41.716346 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 18:33:41.716842 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:33:41.718769 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 12 18:33:41.732085 jq[1461]: false
Dec 12 18:33:41.730349 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 18:33:41.736141 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 18:33:41.740325 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 18:33:41.743687 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 18:33:41.753170 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 18:33:41.754943 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 18:33:41.755570 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 18:33:41.763587 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 18:33:41.764894 oslogin_cache_refresh[1463]: Refreshing passwd entry cache
Dec 12 18:33:41.768577 google_oslogin_nss_cache[1463]: oslogin_cache_refresh[1463]: Refreshing passwd entry cache
Dec 12 18:33:41.772098 google_oslogin_nss_cache[1463]: oslogin_cache_refresh[1463]: Failure getting users, quitting
Dec 12 18:33:41.772098 google_oslogin_nss_cache[1463]: oslogin_cache_refresh[1463]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:33:41.772098 google_oslogin_nss_cache[1463]: oslogin_cache_refresh[1463]: Refreshing group entry cache
Dec 12 18:33:41.772098 google_oslogin_nss_cache[1463]: oslogin_cache_refresh[1463]: Failure getting groups, quitting
Dec 12 18:33:41.772098 google_oslogin_nss_cache[1463]: oslogin_cache_refresh[1463]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:33:41.769080 oslogin_cache_refresh[1463]: Failure getting users, quitting
Dec 12 18:33:41.769100 oslogin_cache_refresh[1463]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:33:41.769147 oslogin_cache_refresh[1463]: Refreshing group entry cache
Dec 12 18:33:41.769636 oslogin_cache_refresh[1463]: Failure getting groups, quitting
Dec 12 18:33:41.769645 oslogin_cache_refresh[1463]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:33:41.774302 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 18:33:41.791276 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 18:33:41.792242 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 18:33:41.792440 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 18:33:41.792719 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 12 18:33:41.792879 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 12 18:33:41.825092 jq[1472]: true
Dec 12 18:33:41.849165 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 18:33:41.849535 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 18:33:41.854202 tar[1474]: linux-amd64/LICENSE
Dec 12 18:33:41.854202 tar[1474]: linux-amd64/helm
Dec 12 18:33:41.923964 extend-filesystems[1462]: Found /dev/vda6
Dec 12 18:33:41.933671 jq[1482]: true
Dec 12 18:33:41.935680 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 18:33:41.975308 extend-filesystems[1462]: Found /dev/vda9
Dec 12 18:33:41.990989 systemd-networkd[1428]: eth0: Configuring with /run/systemd/network/10-96:e1:85:53:2b:40.network.
Dec 12 18:33:41.997018 extend-filesystems[1462]: Checking size of /dev/vda9
Dec 12 18:33:41.996169 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 18:33:41.996444 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 18:33:42.012669 systemd-networkd[1428]: eth0: Link UP
Dec 12 18:33:42.022265 update_engine[1469]: I20251212 18:33:42.018368 1469 main.cc:92] Flatcar Update Engine starting
Dec 12 18:33:42.017979 dbus-daemon[1459]: [system] SELinux support is enabled
Dec 12 18:33:42.012953 systemd-networkd[1428]: eth0: Gained carrier
Dec 12 18:33:42.026623 coreos-metadata[1458]: Dec 12 18:33:42.025 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 12 18:33:42.024509 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 18:33:42.040761 coreos-metadata[1458]: Dec 12 18:33:42.027 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
Dec 12 18:33:42.028192 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 18:33:42.028232 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 18:33:42.028966 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 18:33:42.028986 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 18:33:42.045898 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Dec 12 18:33:42.048872 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Dec 12 18:33:42.054168 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Dec 12 18:33:42.056109 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:33:42.077161 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:33:42.078130 update_engine[1469]: I20251212 18:33:42.077505 1469 update_check_scheduler.cc:74] Next update check in 9m33s Dec 12 18:33:42.080614 extend-filesystems[1462]: Resized partition /dev/vda9 Dec 12 18:33:42.103386 extend-filesystems[1519]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:33:42.112078 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Dec 12 18:33:42.121721 systemd-networkd[1428]: eth1: Configuring with /run/systemd/network/10-fa:bd:0e:f7:c5:0f.network. Dec 12 18:33:42.125374 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:33:42.127140 systemd-networkd[1428]: eth1: Link UP Dec 12 18:33:42.132012 systemd-networkd[1428]: eth1: Gained carrier Dec 12 18:33:42.713759 systemd-timesyncd[1393]: Contacted time server 134.215.155.177:123 (0.flatcar.pool.ntp.org). Dec 12 18:33:42.713897 systemd-timesyncd[1393]: Initial clock synchronization to Fri 2025-12-12 18:33:42.713395 UTC. Dec 12 18:33:42.713999 systemd-resolved[1375]: Clock change detected. Flushing caches. Dec 12 18:33:42.748298 kernel: ISO 9660 Extensions: RRIP_1991A Dec 12 18:33:42.750439 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 12 18:33:42.753267 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 12 18:33:42.792395 systemd-logind[1468]: New seat seat0. Dec 12 18:33:42.793265 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:33:42.868476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:33:42.886959 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:33:42.872427 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:33:42.893594 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:33:42.902569 systemd[1]: Starting sshkeys.service... Dec 12 18:33:42.935293 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 12 18:33:42.960583 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:33:42.977504 extend-filesystems[1519]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 18:33:42.977504 extend-filesystems[1519]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 12 18:33:42.977504 extend-filesystems[1519]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 12 18:33:42.989947 extend-filesystems[1462]: Resized filesystem in /dev/vda9 Dec 12 18:33:42.986750 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:33:42.987080 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:33:43.007497 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 18:33:43.012455 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 18:33:43.014980 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
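
For scale, the online resize above grows the root filesystem from 553472 to 15121403 blocks at 4 KiB per block. A quick check of what those block counts mean in bytes (figures taken directly from the extend-filesystems output):

    BLOCK = 4096  # "(4k) blocks", per the resize2fs output above
    old_blocks, new_blocks = 553_472, 15_121_403

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB (the shipped image), after: 57.68 GiB (the full droplet disk)
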
Dec 12 18:33:43.165324 coreos-metadata[1550]: Dec 12 18:33:43.164 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:33:43.182963 coreos-metadata[1550]: Dec 12 18:33:43.181 INFO Fetch successful Dec 12 18:33:43.211377 unknown[1550]: wrote ssh authorized keys file for user: core Dec 12 18:33:43.220642 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:33:43.225540 containerd[1494]: time="2025-12-12T18:33:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:33:43.227620 containerd[1494]: time="2025-12-12T18:33:43.226604156Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 18:33:43.247849 containerd[1494]: time="2025-12-12T18:33:43.247775073Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.911µs" Dec 12 18:33:43.247849 containerd[1494]: time="2025-12-12T18:33:43.247831272Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:33:43.247849 containerd[1494]: time="2025-12-12T18:33:43.247861964Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:33:43.249247 containerd[1494]: time="2025-12-12T18:33:43.248078672Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:33:43.249247 containerd[1494]: time="2025-12-12T18:33:43.248099753Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:33:43.249247 containerd[1494]: time="2025-12-12T18:33:43.248132283Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:33:43.249247 containerd[1494]: time="2025-12-12T18:33:43.248187823Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:33:43.249247 containerd[1494]: time="2025-12-12T18:33:43.248200120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:33:43.249247 containerd[1494]: time="2025-12-12T18:33:43.248549268Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:33:43.249247 containerd[1494]: time="2025-12-12T18:33:43.248575676Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:33:43.249247 containerd[1494]: time="2025-12-12T18:33:43.248593468Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:33:43.249247 containerd[1494]: time="2025-12-12T18:33:43.248607990Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:33:43.249247 containerd[1494]: time="2025-12-12T18:33:43.248715275Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:33:43.249247 containerd[1494]: 
time="2025-12-12T18:33:43.248933839Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:33:43.249629 containerd[1494]: time="2025-12-12T18:33:43.248964449Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:33:43.249629 containerd[1494]: time="2025-12-12T18:33:43.248975968Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:33:43.249629 containerd[1494]: time="2025-12-12T18:33:43.249006456Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:33:43.249629 containerd[1494]: time="2025-12-12T18:33:43.249324159Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:33:43.249629 containerd[1494]: time="2025-12-12T18:33:43.249421550Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259696873Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259802418Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259825133Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259838748Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259851190Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259861709Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259881136Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259893320Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259904300Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259915427Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259925401Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.259939826Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.260122038Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:33:43.261621 containerd[1494]: time="2025-12-12T18:33:43.260142690Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260156452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260168983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260182084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260195368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260219434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260252916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260264812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260281980Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260294963Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260359682Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260380607Z" level=info msg="Start snapshots syncer" Dec 12 18:33:43.262009 containerd[1494]: time="2025-12-12T18:33:43.260412236Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:33:43.264577 containerd[1494]: time="2025-12-12T18:33:43.260741329Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:33:43.264577 containerd[1494]: time="2025-12-12T18:33:43.260797150Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.260856686Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.260977369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.260996114Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.261006808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.261019890Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.261033608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.261046772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.261074068Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.261117136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:33:43.264819 containerd[1494]: 
time="2025-12-12T18:33:43.261135979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.261151723Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.261182307Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.261674355Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:33:43.264819 containerd[1494]: time="2025-12-12T18:33:43.261697590Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:33:43.270050 containerd[1494]: time="2025-12-12T18:33:43.261708887Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:33:43.270050 containerd[1494]: time="2025-12-12T18:33:43.261717757Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:33:43.270050 containerd[1494]: time="2025-12-12T18:33:43.261728907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:33:43.270050 containerd[1494]: time="2025-12-12T18:33:43.261749717Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:33:43.270050 containerd[1494]: time="2025-12-12T18:33:43.261769651Z" level=info msg="runtime interface created" Dec 12 18:33:43.270050 containerd[1494]: time="2025-12-12T18:33:43.261777699Z" level=info msg="created NRI interface" Dec 12 18:33:43.270050 containerd[1494]: time="2025-12-12T18:33:43.261811211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:33:43.270050 containerd[1494]: time="2025-12-12T18:33:43.261836688Z" level=info msg="Connect containerd service" Dec 12 18:33:43.270050 containerd[1494]: time="2025-12-12T18:33:43.261872497Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:33:43.270050 containerd[1494]: time="2025-12-12T18:33:43.262835674Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:33:43.289803 update-ssh-keys[1556]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:33:43.293753 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:33:43.306828 systemd[1]: Finished sshkeys.service. 
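
The CRI plugin's "no network config found in /etc/cni/net.d" error is benign at this stage: the directory is populated later, once a CNI add-on is installed. As a sketch of the kind of conflist that would satisfy that check, assuming a plain bridge network (the file name, network name, and subnet below are illustrative assumptions, not what this node ends up using; confDir comes from the cri config dump above):

    import json
    from pathlib import Path

    conflist = {
        "cniVersion": "1.0.0",
        "name": "example-net",  # assumed name
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.85.0.0/16"}]],  # assumed subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }],
    }
    # requires root; /etc/cni/net.d is the confDir logged by the cri plugin
    Path("/etc/cni/net.d/10-example.conflist").write_text(json.dumps(conflist, indent=2))
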
Dec 12 18:33:43.356303 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:33:43.385699 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 12 18:33:43.391526 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:33:43.426382 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:33:43.426480 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 12 18:33:43.428007 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 12 18:33:43.435548 kernel: Console: switching to colour dummy device 80x25 Dec 12 18:33:43.435656 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 12 18:33:43.435681 kernel: [drm] features: -context_init Dec 12 18:33:43.438270 kernel: [drm] number of scanouts: 1 Dec 12 18:33:43.438448 kernel: [drm] number of cap sets: 0 Dec 12 18:33:43.440569 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Dec 12 18:33:43.450385 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 12 18:33:43.450484 kernel: Console: switching to colour frame buffer device 128x48 Dec 12 18:33:43.454256 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 12 18:33:43.480598 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 12 18:33:43.518701 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:33:43.527395 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:33:43.536402 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 18:33:43.599188 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:33:43.599495 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:33:43.608133 coreos-metadata[1458]: Dec 12 18:33:43.607 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Dec 12 18:33:43.609076 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:33:43.622047 coreos-metadata[1458]: Dec 12 18:33:43.619 INFO Fetch successful Dec 12 18:33:43.680578 containerd[1494]: time="2025-12-12T18:33:43.680496551Z" level=info msg="Start subscribing containerd event" Dec 12 18:33:43.680775 containerd[1494]: time="2025-12-12T18:33:43.680617600Z" level=info msg="Start recovering state" Dec 12 18:33:43.683742 containerd[1494]: time="2025-12-12T18:33:43.680860269Z" level=info msg="Start event monitor" Dec 12 18:33:43.683742 containerd[1494]: time="2025-12-12T18:33:43.680896340Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:33:43.683742 containerd[1494]: time="2025-12-12T18:33:43.680973177Z" level=info msg="Start streaming server" Dec 12 18:33:43.683742 containerd[1494]: time="2025-12-12T18:33:43.681002663Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:33:43.683742 containerd[1494]: time="2025-12-12T18:33:43.681011769Z" level=info msg="runtime interface starting up..." Dec 12 18:33:43.683742 containerd[1494]: time="2025-12-12T18:33:43.681020227Z" level=info msg="starting plugins..." Dec 12 18:33:43.683742 containerd[1494]: time="2025-12-12T18:33:43.681043499Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:33:43.683742 containerd[1494]: time="2025-12-12T18:33:43.683517373Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:33:43.683742 containerd[1494]: time="2025-12-12T18:33:43.683611383Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 12 18:33:43.681838 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:33:43.683536 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:33:43.688546 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:33:43.691478 containerd[1494]: time="2025-12-12T18:33:43.688729431Z" level=info msg="containerd successfully booted in 0.465454s" Dec 12 18:33:43.698450 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:33:43.701758 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:33:43.706173 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:33:43.706668 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:33:43.826524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:33:43.960974 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:33:44.003632 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:33:44.043119 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:33:44.043420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:33:44.044606 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:33:44.051153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:33:44.055186 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:33:44.069250 systemd-networkd[1428]: eth1: Gained IPv6LL Dec 12 18:33:44.087811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:33:44.089158 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:33:44.095779 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:33:44.100106 tar[1474]: linux-amd64/README.md Dec 12 18:33:44.100700 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:33:44.112276 systemd-logind[1468]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:33:44.123289 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:33:44.133648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:33:44.141019 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:33:44.148181 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:33:44.229309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:33:44.232016 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:33:44.289328 kernel: EDAC MC: Ver: 3.0.0 Dec 12 18:33:44.453013 systemd-networkd[1428]: eth0: Gained IPv6LL Dec 12 18:33:45.346198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:33:45.350760 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:33:45.354842 systemd[1]: Startup finished in 3.235s (kernel) + 5.839s (initrd) + 6.359s (userspace) = 15.434s. 
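
The reported startup total is just the sum of the three phases; the last digit differs because systemd sums the unrounded internal timestamps:

    kernel, initrd, userspace = 3.235, 5.839, 6.359  # seconds, from the log above
    print(f"{kernel + initrd + userspace:.3f}s")  # 15.433s; the journal shows 15.434s
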
Dec 12 18:33:45.360092 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:33:46.024071 kubelet[1648]: E1212 18:33:46.023985 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:33:46.027370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:33:46.027582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:33:46.028396 systemd[1]: kubelet.service: Consumed 1.302s CPU time, 267.7M memory peak. Dec 12 18:33:47.368690 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:33:47.370337 systemd[1]: Started sshd@0-134.199.220.206:22-147.75.109.163:55068.service - OpenSSH per-connection server daemon (147.75.109.163:55068). Dec 12 18:33:47.461196 sshd[1660]: Accepted publickey for core from 147.75.109.163 port 55068 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:33:47.463379 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:47.475480 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:33:47.476563 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:33:47.481344 systemd-logind[1468]: New session 1 of user core. Dec 12 18:33:47.513291 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:33:47.516799 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:33:47.532475 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:33:47.536125 systemd-logind[1468]: New session c1 of user core. Dec 12 18:33:47.699199 systemd[1665]: Queued start job for default target default.target. Dec 12 18:33:47.715249 systemd[1665]: Created slice app.slice - User Application Slice. Dec 12 18:33:47.715532 systemd[1665]: Reached target paths.target - Paths. Dec 12 18:33:47.715704 systemd[1665]: Reached target timers.target - Timers. Dec 12 18:33:47.717601 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:33:47.731935 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:33:47.732060 systemd[1665]: Reached target sockets.target - Sockets. Dec 12 18:33:47.732115 systemd[1665]: Reached target basic.target - Basic System. Dec 12 18:33:47.732169 systemd[1665]: Reached target default.target - Main User Target. Dec 12 18:33:47.732202 systemd[1665]: Startup finished in 187ms. Dec 12 18:33:47.732533 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:33:47.740558 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:33:47.817442 systemd[1]: Started sshd@1-134.199.220.206:22-147.75.109.163:55072.service - OpenSSH per-connection server daemon (147.75.109.163:55072). 
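
The kubelet exit above is expected on first boot: nothing has written /var/lib/kubelet/config.yaml yet, so systemd keeps restarting the unit until the file appears. A minimal sketch of the kind of file that would satisfy the load; this is a hypothetical hand-written stand-in, since in practice kubeadm generates a much fuller config (the cgroupDriver value matches the systemd driver this node negotiates later in the log):

    import textwrap
    from pathlib import Path

    # Assumed minimal KubeletConfiguration; real clusters get this from kubeadm.
    CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
    """)

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)  # requires root
    path.write_text(CONFIG)
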
Dec 12 18:33:47.890110 sshd[1676]: Accepted publickey for core from 147.75.109.163 port 55072 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:33:47.891790 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:47.899596 systemd-logind[1468]: New session 2 of user core. Dec 12 18:33:47.906508 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:33:47.971584 sshd[1679]: Connection closed by 147.75.109.163 port 55072 Dec 12 18:33:47.972138 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:47.987339 systemd[1]: sshd@1-134.199.220.206:22-147.75.109.163:55072.service: Deactivated successfully. Dec 12 18:33:47.990014 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:33:47.991592 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:33:47.994652 systemd[1]: Started sshd@2-134.199.220.206:22-147.75.109.163:55074.service - OpenSSH per-connection server daemon (147.75.109.163:55074). Dec 12 18:33:47.996195 systemd-logind[1468]: Removed session 2. Dec 12 18:33:48.063330 sshd[1685]: Accepted publickey for core from 147.75.109.163 port 55074 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:33:48.065347 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:48.072393 systemd-logind[1468]: New session 3 of user core. Dec 12 18:33:48.090581 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:33:48.152314 sshd[1688]: Connection closed by 147.75.109.163 port 55074 Dec 12 18:33:48.153038 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:48.166286 systemd[1]: sshd@2-134.199.220.206:22-147.75.109.163:55074.service: Deactivated successfully. Dec 12 18:33:48.169105 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:33:48.170206 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:33:48.175954 systemd[1]: Started sshd@3-134.199.220.206:22-147.75.109.163:55088.service - OpenSSH per-connection server daemon (147.75.109.163:55088). Dec 12 18:33:48.177832 systemd-logind[1468]: Removed session 3. Dec 12 18:33:48.243676 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 55088 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:33:48.245770 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:48.252136 systemd-logind[1468]: New session 4 of user core. Dec 12 18:33:48.258523 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:33:48.326505 sshd[1697]: Connection closed by 147.75.109.163 port 55088 Dec 12 18:33:48.327117 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:48.336780 systemd[1]: sshd@3-134.199.220.206:22-147.75.109.163:55088.service: Deactivated successfully. Dec 12 18:33:48.339301 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:33:48.340354 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:33:48.344400 systemd[1]: Started sshd@4-134.199.220.206:22-147.75.109.163:55090.service - OpenSSH per-connection server daemon (147.75.109.163:55090). Dec 12 18:33:48.346167 systemd-logind[1468]: Removed session 4. 
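
The "SHA256:GRQL0e..." string sshd logs on each accepted key is the unpadded base64 of the SHA-256 digest of the raw public-key blob. A sketch of how that fingerprint is derived from an authorized_keys entry (the key text in the comment is a placeholder):

    import base64
    import hashlib

    def ssh_fingerprint(authorized_keys_line: str) -> str:
        # field 2 of an authorized_keys / id_*.pub line is the base64-encoded key blob
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # ssh_fingerprint("ssh-rsa AAAA... core@host") -> "SHA256:GRQL..." for the matching key
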
Dec 12 18:33:48.407041 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 55090 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:33:48.408801 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:48.415153 systemd-logind[1468]: New session 5 of user core. Dec 12 18:33:48.422563 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:33:48.491180 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:33:48.491517 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:33:48.509990 sudo[1707]: pam_unix(sudo:session): session closed for user root Dec 12 18:33:48.513370 sshd[1706]: Connection closed by 147.75.109.163 port 55090 Dec 12 18:33:48.514217 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:48.528741 systemd[1]: sshd@4-134.199.220.206:22-147.75.109.163:55090.service: Deactivated successfully. Dec 12 18:33:48.531402 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:33:48.532615 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:33:48.535989 systemd[1]: Started sshd@5-134.199.220.206:22-147.75.109.163:55098.service - OpenSSH per-connection server daemon (147.75.109.163:55098). Dec 12 18:33:48.537657 systemd-logind[1468]: Removed session 5. Dec 12 18:33:48.603379 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 55098 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:33:48.604803 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:48.610566 systemd-logind[1468]: New session 6 of user core. Dec 12 18:33:48.617550 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:33:48.678558 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:33:48.679249 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:33:48.685539 sudo[1718]: pam_unix(sudo:session): session closed for user root Dec 12 18:33:48.692459 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:33:48.693133 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:33:48.704914 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:33:48.748080 augenrules[1740]: No rules Dec 12 18:33:48.749694 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:33:48.750046 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:33:48.752443 sudo[1717]: pam_unix(sudo:session): session closed for user root Dec 12 18:33:48.756360 sshd[1716]: Connection closed by 147.75.109.163 port 55098 Dec 12 18:33:48.756895 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:48.774144 systemd[1]: sshd@5-134.199.220.206:22-147.75.109.163:55098.service: Deactivated successfully. Dec 12 18:33:48.777115 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:33:48.779130 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:33:48.783507 systemd[1]: Started sshd@6-134.199.220.206:22-147.75.109.163:55106.service - OpenSSH per-connection server daemon (147.75.109.163:55106). Dec 12 18:33:48.784930 systemd-logind[1468]: Removed session 6. 
Dec 12 18:33:48.844157 sshd[1749]: Accepted publickey for core from 147.75.109.163 port 55106 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:33:48.846105 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:48.853134 systemd-logind[1468]: New session 7 of user core. Dec 12 18:33:48.861585 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:33:48.922627 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:33:48.922966 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:33:49.463775 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:33:49.478121 (dockerd)[1772]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:33:49.885170 dockerd[1772]: time="2025-12-12T18:33:49.884579975Z" level=info msg="Starting up" Dec 12 18:33:49.886355 dockerd[1772]: time="2025-12-12T18:33:49.886315067Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:33:49.904784 dockerd[1772]: time="2025-12-12T18:33:49.904628319Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:33:49.953954 dockerd[1772]: time="2025-12-12T18:33:49.953598276Z" level=info msg="Loading containers: start." Dec 12 18:33:49.966253 kernel: Initializing XFRM netlink socket Dec 12 18:33:50.280999 systemd-networkd[1428]: docker0: Link UP Dec 12 18:33:50.284606 dockerd[1772]: time="2025-12-12T18:33:50.284557267Z" level=info msg="Loading containers: done." Dec 12 18:33:50.305915 dockerd[1772]: time="2025-12-12T18:33:50.305143384Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:33:50.305915 dockerd[1772]: time="2025-12-12T18:33:50.305289301Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:33:50.305915 dockerd[1772]: time="2025-12-12T18:33:50.305398154Z" level=info msg="Initializing buildkit" Dec 12 18:33:50.307672 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck519498971-merged.mount: Deactivated successfully. Dec 12 18:33:50.336076 dockerd[1772]: time="2025-12-12T18:33:50.335770213Z" level=info msg="Completed buildkit initialization" Dec 12 18:33:50.342560 dockerd[1772]: time="2025-12-12T18:33:50.342470769Z" level=info msg="Daemon has completed initialization" Dec 12 18:33:50.342828 dockerd[1772]: time="2025-12-12T18:33:50.342764091Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:33:50.343317 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:33:51.337758 containerd[1494]: time="2025-12-12T18:33:51.337712839Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 18:33:51.919685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2280529664.mount: Deactivated successfully. 
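
Unit names like var-lib-docker-overlay2-opaque\x2dbug\x2dcheck519498971-merged.mount use systemd's path escaping: '/' becomes '-', and bytes that would be ambiguous (including a literal '-') become \xXX. A rough approximation of that rule, assuming the common case; `systemd-escape --path` is the real tool and also handles edge cases such as a leading '.':

    def systemd_escape_path(path: str) -> str:
        # Approximation of `systemd-escape --path`: strip the leading '/',
        # keep [A-Za-z0-9:_.], map '/' to '-', hex-escape everything else.
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
        return "".join(out)

    print(systemd_escape_path("/var/lib/docker/overlay2/opaque-bug-check519498971/merged"))
    # -> var-lib-docker-overlay2-opaque\x2dbug\x2dcheck519498971-merged
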
Dec 12 18:33:53.212261 containerd[1494]: time="2025-12-12T18:33:53.211208802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:53.212948 containerd[1494]: time="2025-12-12T18:33:53.212912136Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Dec 12 18:33:53.214267 containerd[1494]: time="2025-12-12T18:33:53.214212502Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:53.217580 containerd[1494]: time="2025-12-12T18:33:53.217528606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:53.218976 containerd[1494]: time="2025-12-12T18:33:53.218926282Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.88117139s" Dec 12 18:33:53.218976 containerd[1494]: time="2025-12-12T18:33:53.218978457Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 12 18:33:53.219689 containerd[1494]: time="2025-12-12T18:33:53.219659707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 18:33:54.579916 containerd[1494]: time="2025-12-12T18:33:54.579853194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:54.580934 containerd[1494]: time="2025-12-12T18:33:54.580886978Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Dec 12 18:33:54.582059 containerd[1494]: time="2025-12-12T18:33:54.581592090Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:54.584310 containerd[1494]: time="2025-12-12T18:33:54.584274455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:54.585527 containerd[1494]: time="2025-12-12T18:33:54.585491313Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.365196681s" Dec 12 18:33:54.585662 containerd[1494]: time="2025-12-12T18:33:54.585647081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 12 18:33:54.586262 
containerd[1494]: time="2025-12-12T18:33:54.586206834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 18:33:55.908386 containerd[1494]: time="2025-12-12T18:33:55.908317348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:55.910093 containerd[1494]: time="2025-12-12T18:33:55.910031852Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Dec 12 18:33:55.910660 containerd[1494]: time="2025-12-12T18:33:55.910621280Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:55.914411 containerd[1494]: time="2025-12-12T18:33:55.914342257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:55.915827 containerd[1494]: time="2025-12-12T18:33:55.915195056Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.328804075s" Dec 12 18:33:55.915827 containerd[1494]: time="2025-12-12T18:33:55.915271832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 12 18:33:55.916085 containerd[1494]: time="2025-12-12T18:33:55.916055838Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 18:33:56.278099 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:33:56.281491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:33:56.557965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:33:56.569788 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:33:56.655629 kubelet[2070]: E1212 18:33:56.655583 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:33:56.661870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:33:56.662094 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:33:56.664488 systemd[1]: kubelet.service: Consumed 264ms CPU time, 111M memory peak. Dec 12 18:33:56.999179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount68080792.mount: Deactivated successfully. 
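
The pull entries carry enough data for a quick throughput estimate, e.g. for the first two control-plane images above (bytes read and wall-clock durations taken from the log):

    pulls = {
        "kube-apiserver:v1.33.7": (30_114_712, 1.88117139),
        "kube-controller-manager:v1.33.7": (26_016_781, 1.365196681),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 2**20:.1f} MiB/s")
    # roughly 15.3 and 18.2 MiB/s against registry.k8s.io
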
Dec 12 18:33:57.597283 containerd[1494]: time="2025-12-12T18:33:57.596813475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:57.597974 containerd[1494]: time="2025-12-12T18:33:57.597946373Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Dec 12 18:33:57.598414 containerd[1494]: time="2025-12-12T18:33:57.598370356Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:57.599915 containerd[1494]: time="2025-12-12T18:33:57.599867652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:57.600366 containerd[1494]: time="2025-12-12T18:33:57.600343322Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.684255067s" Dec 12 18:33:57.600463 containerd[1494]: time="2025-12-12T18:33:57.600448762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 12 18:33:57.601046 containerd[1494]: time="2025-12-12T18:33:57.600941733Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 18:33:57.602575 systemd-resolved[1375]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 12 18:33:58.084387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025207336.mount: Deactivated successfully. 
Dec 12 18:33:58.995178 containerd[1494]: time="2025-12-12T18:33:58.995124277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:58.996068 containerd[1494]: time="2025-12-12T18:33:58.996030565Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Dec 12 18:33:58.996576 containerd[1494]: time="2025-12-12T18:33:58.996541973Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:58.999874 containerd[1494]: time="2025-12-12T18:33:58.999834687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:33:59.001578 containerd[1494]: time="2025-12-12T18:33:59.001535574Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.400406605s" Dec 12 18:33:59.001578 containerd[1494]: time="2025-12-12T18:33:59.001580079Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 12 18:33:59.002361 containerd[1494]: time="2025-12-12T18:33:59.002336153Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:33:59.485756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount550827320.mount: Deactivated successfully. 
Dec 12 18:33:59.492377 containerd[1494]: time="2025-12-12T18:33:59.492128353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:33:59.493601 containerd[1494]: time="2025-12-12T18:33:59.493559313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 12 18:33:59.495248 containerd[1494]: time="2025-12-12T18:33:59.494978096Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:33:59.496173 containerd[1494]: time="2025-12-12T18:33:59.496142497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:33:59.497730 containerd[1494]: time="2025-12-12T18:33:59.497700820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.255535ms" Dec 12 18:33:59.497973 containerd[1494]: time="2025-12-12T18:33:59.497831385Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:33:59.498657 containerd[1494]: time="2025-12-12T18:33:59.498600852Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 18:34:00.019050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3124591735.mount: Deactivated successfully. Dec 12 18:34:00.708546 systemd-resolved[1375]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Dec 12 18:34:01.925837 containerd[1494]: time="2025-12-12T18:34:01.925753185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:01.927251 containerd[1494]: time="2025-12-12T18:34:01.927187731Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Dec 12 18:34:01.929427 containerd[1494]: time="2025-12-12T18:34:01.927614135Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:01.931659 containerd[1494]: time="2025-12-12T18:34:01.931598804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:01.933134 containerd[1494]: time="2025-12-12T18:34:01.933088269Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.434454797s" Dec 12 18:34:01.933325 containerd[1494]: time="2025-12-12T18:34:01.933306946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 12 18:34:05.918081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:34:05.918736 systemd[1]: kubelet.service: Consumed 264ms CPU time, 111M memory peak. Dec 12 18:34:05.921319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:34:05.955380 systemd[1]: Reload requested from client PID 2219 ('systemctl') (unit session-7.scope)... Dec 12 18:34:05.955397 systemd[1]: Reloading... Dec 12 18:34:06.085268 zram_generator::config[2262]: No configuration found. Dec 12 18:34:06.339645 systemd[1]: Reloading finished in 383 ms. Dec 12 18:34:06.419322 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:34:06.419713 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:34:06.420103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:34:06.420171 systemd[1]: kubelet.service: Consumed 118ms CPU time, 97.9M memory peak. Dec 12 18:34:06.422731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:34:06.586739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:34:06.599646 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:34:06.664344 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:34:06.664758 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:34:06.664816 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:34:06.664965 kubelet[2316]: I1212 18:34:06.664924 2316 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:34:07.115764 kubelet[2316]: I1212 18:34:07.115715 2316 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:34:07.117823 kubelet[2316]: I1212 18:34:07.117791 2316 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:34:07.119070 kubelet[2316]: I1212 18:34:07.119038 2316 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:34:07.153691 kubelet[2316]: I1212 18:34:07.153651 2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:34:07.155440 kubelet[2316]: E1212 18:34:07.155304 2316 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://134.199.220.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 134.199.220.206:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:34:07.171756 kubelet[2316]: I1212 18:34:07.171728 2316 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:34:07.178295 kubelet[2316]: I1212 18:34:07.178246 2316 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:34:07.182938 kubelet[2316]: I1212 18:34:07.182828 2316 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:34:07.186837 kubelet[2316]: I1212 18:34:07.182908 2316 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-8-48b4194eb4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:34:07.186837 kubelet[2316]: I1212 18:34:07.186843 2316 
topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:34:07.187111 kubelet[2316]: I1212 18:34:07.186876 2316 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:34:07.187111 kubelet[2316]: I1212 18:34:07.187099 2316 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:34:07.190902 kubelet[2316]: I1212 18:34:07.190387 2316 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:34:07.190902 kubelet[2316]: I1212 18:34:07.190434 2316 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:34:07.190902 kubelet[2316]: I1212 18:34:07.190487 2316 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:34:07.192875 kubelet[2316]: I1212 18:34:07.192344 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:34:07.208187 kubelet[2316]: E1212 18:34:07.208086 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://134.199.220.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.220.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:34:07.209312 kubelet[2316]: I1212 18:34:07.208395 2316 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:34:07.209312 kubelet[2316]: I1212 18:34:07.209023 2316 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:34:07.212259 kubelet[2316]: W1212 18:34:07.211108 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 12 18:34:07.212448 kubelet[2316]: E1212 18:34:07.212400 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://134.199.220.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-8-48b4194eb4&limit=500&resourceVersion=0\": dial tcp 134.199.220.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:34:07.215714 kubelet[2316]: I1212 18:34:07.215689 2316 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:34:07.215913 kubelet[2316]: I1212 18:34:07.215901 2316 server.go:1289] "Started kubelet" Dec 12 18:34:07.217926 kubelet[2316]: I1212 18:34:07.217900 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:34:07.222341 kubelet[2316]: E1212 18:34:07.220364 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.220.206:6443/api/v1/namespaces/default/events\": dial tcp 134.199.220.206:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-8-48b4194eb4.18808b82a85c16bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-8-48b4194eb4,UID:ci-4459.2.2-8-48b4194eb4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-8-48b4194eb4,},FirstTimestamp:2025-12-12 18:34:07.215851195 +0000 UTC m=+0.608875753,LastTimestamp:2025-12-12 18:34:07.215851195 +0000 UTC m=+0.608875753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-8-48b4194eb4,}" Dec 12 18:34:07.222954 kubelet[2316]: I1212 18:34:07.222916 2316 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:34:07.225014 kubelet[2316]: I1212 18:34:07.224002 2316 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:34:07.228828 kubelet[2316]: I1212 18:34:07.228632 2316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:34:07.229133 kubelet[2316]: I1212 18:34:07.228964 2316 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:34:07.229360 kubelet[2316]: I1212 18:34:07.229315 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:34:07.229708 kubelet[2316]: I1212 18:34:07.229479 2316 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:34:07.230759 kubelet[2316]: E1212 18:34:07.230015 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-8-48b4194eb4\" not found" Dec 12 18:34:07.231613 kubelet[2316]: I1212 18:34:07.231593 2316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:34:07.231769 kubelet[2316]: I1212 18:34:07.231751 2316 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:34:07.233140 kubelet[2316]: E1212 18:34:07.233116 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://134.199.220.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.220.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:34:07.233555 kubelet[2316]: 
E1212 18:34:07.233533 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.220.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-8-48b4194eb4?timeout=10s\": dial tcp 134.199.220.206:6443: connect: connection refused" interval="200ms" Dec 12 18:34:07.233743 kubelet[2316]: E1212 18:34:07.233728 2316 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:34:07.234497 kubelet[2316]: I1212 18:34:07.234480 2316 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:34:07.234648 kubelet[2316]: I1212 18:34:07.234634 2316 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:34:07.238344 kubelet[2316]: I1212 18:34:07.238318 2316 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:34:07.258081 kubelet[2316]: I1212 18:34:07.258028 2316 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:34:07.260729 kubelet[2316]: I1212 18:34:07.260605 2316 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:34:07.260864 kubelet[2316]: I1212 18:34:07.260738 2316 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:34:07.261794 kubelet[2316]: I1212 18:34:07.261747 2316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:34:07.261794 kubelet[2316]: I1212 18:34:07.261776 2316 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:34:07.262689 kubelet[2316]: E1212 18:34:07.262266 2316 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:34:07.267420 kubelet[2316]: E1212 18:34:07.267375 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://134.199.220.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.220.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:34:07.272437 kubelet[2316]: I1212 18:34:07.272403 2316 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:34:07.272692 kubelet[2316]: I1212 18:34:07.272631 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:34:07.272692 kubelet[2316]: I1212 18:34:07.272659 2316 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:34:07.274478 kubelet[2316]: I1212 18:34:07.274451 2316 policy_none.go:49] "None policy: Start" Dec 12 18:34:07.274674 kubelet[2316]: I1212 18:34:07.274616 2316 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:34:07.274674 kubelet[2316]: I1212 18:34:07.274634 2316 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:34:07.281488 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:34:07.291938 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:34:07.297214 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 12 18:34:07.306616 kubelet[2316]: E1212 18:34:07.306573 2316 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:34:07.306826 kubelet[2316]: I1212 18:34:07.306802 2316 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:34:07.306865 kubelet[2316]: I1212 18:34:07.306818 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:34:07.307762 kubelet[2316]: I1212 18:34:07.307631 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:34:07.313126 kubelet[2316]: E1212 18:34:07.313049 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:34:07.313335 kubelet[2316]: E1212 18:34:07.313177 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-8-48b4194eb4\" not found" Dec 12 18:34:07.377072 systemd[1]: Created slice kubepods-burstable-pod56951acf31931780745ba88d87f81def.slice - libcontainer container kubepods-burstable-pod56951acf31931780745ba88d87f81def.slice. Dec 12 18:34:07.398856 kubelet[2316]: E1212 18:34:07.398799 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-8-48b4194eb4\" not found" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.404128 systemd[1]: Created slice kubepods-burstable-poddfef1720dc76009c61501c1368644bb9.slice - libcontainer container kubepods-burstable-poddfef1720dc76009c61501c1368644bb9.slice. Dec 12 18:34:07.407385 kubelet[2316]: E1212 18:34:07.406973 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-8-48b4194eb4\" not found" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.409109 kubelet[2316]: I1212 18:34:07.409081 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.409652 kubelet[2316]: E1212 18:34:07.409555 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.220.206:6443/api/v1/nodes\": dial tcp 134.199.220.206:6443: connect: connection refused" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.411797 systemd[1]: Created slice kubepods-burstable-pod095fccc87867e88fef10e13233f9d38a.slice - libcontainer container kubepods-burstable-pod095fccc87867e88fef10e13233f9d38a.slice. 
Dec 12 18:34:07.414023 kubelet[2316]: E1212 18:34:07.413976 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-8-48b4194eb4\" not found" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.434435 kubelet[2316]: E1212 18:34:07.434386 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.220.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-8-48b4194eb4?timeout=10s\": dial tcp 134.199.220.206:6443: connect: connection refused" interval="400ms" Dec 12 18:34:07.532997 kubelet[2316]: I1212 18:34:07.532825 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dfef1720dc76009c61501c1368644bb9-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" (UID: \"dfef1720dc76009c61501c1368644bb9\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.532997 kubelet[2316]: I1212 18:34:07.532942 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dfef1720dc76009c61501c1368644bb9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" (UID: \"dfef1720dc76009c61501c1368644bb9\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.532997 kubelet[2316]: I1212 18:34:07.533009 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dfef1720dc76009c61501c1368644bb9-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" (UID: \"dfef1720dc76009c61501c1368644bb9\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.533288 kubelet[2316]: I1212 18:34:07.533034 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/095fccc87867e88fef10e13233f9d38a-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-8-48b4194eb4\" (UID: \"095fccc87867e88fef10e13233f9d38a\") " pod="kube-system/kube-scheduler-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.533288 kubelet[2316]: I1212 18:34:07.533052 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56951acf31931780745ba88d87f81def-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-8-48b4194eb4\" (UID: \"56951acf31931780745ba88d87f81def\") " pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.534034 kubelet[2316]: I1212 18:34:07.533648 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56951acf31931780745ba88d87f81def-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-8-48b4194eb4\" (UID: \"56951acf31931780745ba88d87f81def\") " pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.534034 kubelet[2316]: I1212 18:34:07.533714 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56951acf31931780745ba88d87f81def-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-8-48b4194eb4\" (UID: \"56951acf31931780745ba88d87f81def\") " 
pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.534034 kubelet[2316]: I1212 18:34:07.533753 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dfef1720dc76009c61501c1368644bb9-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" (UID: \"dfef1720dc76009c61501c1368644bb9\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.534034 kubelet[2316]: I1212 18:34:07.533792 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dfef1720dc76009c61501c1368644bb9-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" (UID: \"dfef1720dc76009c61501c1368644bb9\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.610951 kubelet[2316]: I1212 18:34:07.610906 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.611382 kubelet[2316]: E1212 18:34:07.611330 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.220.206:6443/api/v1/nodes\": dial tcp 134.199.220.206:6443: connect: connection refused" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:07.701706 kubelet[2316]: E1212 18:34:07.701648 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:07.702751 containerd[1494]: time="2025-12-12T18:34:07.702692466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-8-48b4194eb4,Uid:56951acf31931780745ba88d87f81def,Namespace:kube-system,Attempt:0,}" Dec 12 18:34:07.710501 kubelet[2316]: E1212 18:34:07.710436 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:07.715595 kubelet[2316]: E1212 18:34:07.715034 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:07.716163 containerd[1494]: time="2025-12-12T18:34:07.715871698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-8-48b4194eb4,Uid:dfef1720dc76009c61501c1368644bb9,Namespace:kube-system,Attempt:0,}" Dec 12 18:34:07.716400 containerd[1494]: time="2025-12-12T18:34:07.716378955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-8-48b4194eb4,Uid:095fccc87867e88fef10e13233f9d38a,Namespace:kube-system,Attempt:0,}" Dec 12 18:34:07.829481 containerd[1494]: time="2025-12-12T18:34:07.829210232Z" level=info msg="connecting to shim 9db5686b15db88154d5c4d0018cc5d411a492e0d196b5d4926cda4a681dd0bac" address="unix:///run/containerd/s/c1e0dfd2684909b9f063d9695d5162e713fb294fb88eb716cc135577d60e097c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:07.835250 containerd[1494]: time="2025-12-12T18:34:07.835102746Z" level=info msg="connecting to shim 82d50235ce047afa4b80c064061e05f02dfbe14a4be8f52b76e580be2a761357" address="unix:///run/containerd/s/e99f67782565dbba1726ee921171a2b4e9e8ea9bd151a7b75858a51e42d6b96c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:07.836060 kubelet[2316]: 
E1212 18:34:07.835996 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.220.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-8-48b4194eb4?timeout=10s\": dial tcp 134.199.220.206:6443: connect: connection refused" interval="800ms" Dec 12 18:34:07.838495 containerd[1494]: time="2025-12-12T18:34:07.838434926Z" level=info msg="connecting to shim fed471eef56afaae49bbdd3b28452500e77ff1b14423c7c4c1f0733294f3b05b" address="unix:///run/containerd/s/39cf175df2f604e86e2f58a79dd704af9a1f3d11d1512636d0cfb4cbcd8dcbae" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:07.936742 kubelet[2316]: E1212 18:34:07.936583 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.220.206:6443/api/v1/namespaces/default/events\": dial tcp 134.199.220.206:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-8-48b4194eb4.18808b82a85c16bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-8-48b4194eb4,UID:ci-4459.2.2-8-48b4194eb4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-8-48b4194eb4,},FirstTimestamp:2025-12-12 18:34:07.215851195 +0000 UTC m=+0.608875753,LastTimestamp:2025-12-12 18:34:07.215851195 +0000 UTC m=+0.608875753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-8-48b4194eb4,}" Dec 12 18:34:07.946508 systemd[1]: Started cri-containerd-82d50235ce047afa4b80c064061e05f02dfbe14a4be8f52b76e580be2a761357.scope - libcontainer container 82d50235ce047afa4b80c064061e05f02dfbe14a4be8f52b76e580be2a761357. Dec 12 18:34:07.954168 systemd[1]: Started cri-containerd-9db5686b15db88154d5c4d0018cc5d411a492e0d196b5d4926cda4a681dd0bac.scope - libcontainer container 9db5686b15db88154d5c4d0018cc5d411a492e0d196b5d4926cda4a681dd0bac. Dec 12 18:34:07.957544 systemd[1]: Started cri-containerd-fed471eef56afaae49bbdd3b28452500e77ff1b14423c7c4c1f0733294f3b05b.scope - libcontainer container fed471eef56afaae49bbdd3b28452500e77ff1b14423c7c4c1f0733294f3b05b. 
Dec 12 18:34:08.014205 kubelet[2316]: I1212 18:34:08.013782 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:08.014672 kubelet[2316]: E1212 18:34:08.014468 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.220.206:6443/api/v1/nodes\": dial tcp 134.199.220.206:6443: connect: connection refused" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:08.041662 containerd[1494]: time="2025-12-12T18:34:08.041593372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-8-48b4194eb4,Uid:56951acf31931780745ba88d87f81def,Namespace:kube-system,Attempt:0,} returns sandbox id \"9db5686b15db88154d5c4d0018cc5d411a492e0d196b5d4926cda4a681dd0bac\"" Dec 12 18:34:08.045423 kubelet[2316]: E1212 18:34:08.045391 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:08.051522 containerd[1494]: time="2025-12-12T18:34:08.051480364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-8-48b4194eb4,Uid:dfef1720dc76009c61501c1368644bb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fed471eef56afaae49bbdd3b28452500e77ff1b14423c7c4c1f0733294f3b05b\"" Dec 12 18:34:08.053083 containerd[1494]: time="2025-12-12T18:34:08.053038314Z" level=info msg="CreateContainer within sandbox \"9db5686b15db88154d5c4d0018cc5d411a492e0d196b5d4926cda4a681dd0bac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:34:08.053667 kubelet[2316]: E1212 18:34:08.053627 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:08.062251 containerd[1494]: time="2025-12-12T18:34:08.062194412Z" level=info msg="CreateContainer within sandbox \"fed471eef56afaae49bbdd3b28452500e77ff1b14423c7c4c1f0733294f3b05b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:34:08.065124 containerd[1494]: time="2025-12-12T18:34:08.065058281Z" level=info msg="Container c7b7c2d4a97b2ce506a93d4fe001ce7ee5356b519258cfaee9e16bf2a27aafb3: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:08.080184 containerd[1494]: time="2025-12-12T18:34:08.079961433Z" level=info msg="Container 2b03c456386ccdb452383e2dee35ba85cb41c125cb74b90bd46d7cfecbd53eb6: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:08.082612 containerd[1494]: time="2025-12-12T18:34:08.082156163Z" level=info msg="CreateContainer within sandbox \"9db5686b15db88154d5c4d0018cc5d411a492e0d196b5d4926cda4a681dd0bac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c7b7c2d4a97b2ce506a93d4fe001ce7ee5356b519258cfaee9e16bf2a27aafb3\"" Dec 12 18:34:08.086907 containerd[1494]: time="2025-12-12T18:34:08.086619966Z" level=info msg="StartContainer for \"c7b7c2d4a97b2ce506a93d4fe001ce7ee5356b519258cfaee9e16bf2a27aafb3\"" Dec 12 18:34:08.089985 containerd[1494]: time="2025-12-12T18:34:08.089628810Z" level=info msg="connecting to shim c7b7c2d4a97b2ce506a93d4fe001ce7ee5356b519258cfaee9e16bf2a27aafb3" address="unix:///run/containerd/s/c1e0dfd2684909b9f063d9695d5162e713fb294fb88eb716cc135577d60e097c" protocol=ttrpc version=3 Dec 12 18:34:08.101063 containerd[1494]: time="2025-12-12T18:34:08.100562772Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-8-48b4194eb4,Uid:095fccc87867e88fef10e13233f9d38a,Namespace:kube-system,Attempt:0,} returns sandbox id \"82d50235ce047afa4b80c064061e05f02dfbe14a4be8f52b76e580be2a761357\"" Dec 12 18:34:08.103010 kubelet[2316]: E1212 18:34:08.102978 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:08.107030 containerd[1494]: time="2025-12-12T18:34:08.106975211Z" level=info msg="CreateContainer within sandbox \"82d50235ce047afa4b80c064061e05f02dfbe14a4be8f52b76e580be2a761357\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:34:08.108584 containerd[1494]: time="2025-12-12T18:34:08.108550490Z" level=info msg="CreateContainer within sandbox \"fed471eef56afaae49bbdd3b28452500e77ff1b14423c7c4c1f0733294f3b05b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2b03c456386ccdb452383e2dee35ba85cb41c125cb74b90bd46d7cfecbd53eb6\"" Dec 12 18:34:08.109238 containerd[1494]: time="2025-12-12T18:34:08.109180546Z" level=info msg="StartContainer for \"2b03c456386ccdb452383e2dee35ba85cb41c125cb74b90bd46d7cfecbd53eb6\"" Dec 12 18:34:08.110491 containerd[1494]: time="2025-12-12T18:34:08.110460054Z" level=info msg="connecting to shim 2b03c456386ccdb452383e2dee35ba85cb41c125cb74b90bd46d7cfecbd53eb6" address="unix:///run/containerd/s/39cf175df2f604e86e2f58a79dd704af9a1f3d11d1512636d0cfb4cbcd8dcbae" protocol=ttrpc version=3 Dec 12 18:34:08.115526 containerd[1494]: time="2025-12-12T18:34:08.115469533Z" level=info msg="Container 705ab508afaf181966c3cba4c32fa5561ce77c1a118ed4b3f171422a16023db3: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:08.128049 containerd[1494]: time="2025-12-12T18:34:08.128001988Z" level=info msg="CreateContainer within sandbox \"82d50235ce047afa4b80c064061e05f02dfbe14a4be8f52b76e580be2a761357\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"705ab508afaf181966c3cba4c32fa5561ce77c1a118ed4b3f171422a16023db3\"" Dec 12 18:34:08.128758 containerd[1494]: time="2025-12-12T18:34:08.128729104Z" level=info msg="StartContainer for \"705ab508afaf181966c3cba4c32fa5561ce77c1a118ed4b3f171422a16023db3\"" Dec 12 18:34:08.129537 systemd[1]: Started cri-containerd-c7b7c2d4a97b2ce506a93d4fe001ce7ee5356b519258cfaee9e16bf2a27aafb3.scope - libcontainer container c7b7c2d4a97b2ce506a93d4fe001ce7ee5356b519258cfaee9e16bf2a27aafb3. Dec 12 18:34:08.132055 containerd[1494]: time="2025-12-12T18:34:08.132002167Z" level=info msg="connecting to shim 705ab508afaf181966c3cba4c32fa5561ce77c1a118ed4b3f171422a16023db3" address="unix:///run/containerd/s/e99f67782565dbba1726ee921171a2b4e9e8ea9bd151a7b75858a51e42d6b96c" protocol=ttrpc version=3 Dec 12 18:34:08.155556 systemd[1]: Started cri-containerd-2b03c456386ccdb452383e2dee35ba85cb41c125cb74b90bd46d7cfecbd53eb6.scope - libcontainer container 2b03c456386ccdb452383e2dee35ba85cb41c125cb74b90bd46d7cfecbd53eb6. 
Dec 12 18:34:08.181564 kubelet[2316]: E1212 18:34:08.181494 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://134.199.220.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.220.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:34:08.182959 systemd[1]: Started cri-containerd-705ab508afaf181966c3cba4c32fa5561ce77c1a118ed4b3f171422a16023db3.scope - libcontainer container 705ab508afaf181966c3cba4c32fa5561ce77c1a118ed4b3f171422a16023db3. Dec 12 18:34:08.254647 containerd[1494]: time="2025-12-12T18:34:08.254515653Z" level=info msg="StartContainer for \"c7b7c2d4a97b2ce506a93d4fe001ce7ee5356b519258cfaee9e16bf2a27aafb3\" returns successfully" Dec 12 18:34:08.264216 containerd[1494]: time="2025-12-12T18:34:08.264171002Z" level=info msg="StartContainer for \"2b03c456386ccdb452383e2dee35ba85cb41c125cb74b90bd46d7cfecbd53eb6\" returns successfully" Dec 12 18:34:08.288709 kubelet[2316]: E1212 18:34:08.288500 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://134.199.220.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.220.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:34:08.294811 kubelet[2316]: E1212 18:34:08.294760 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-8-48b4194eb4\" not found" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:08.294953 kubelet[2316]: E1212 18:34:08.294907 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:08.296618 kubelet[2316]: E1212 18:34:08.296387 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://134.199.220.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.220.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:34:08.307073 kubelet[2316]: E1212 18:34:08.307005 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-8-48b4194eb4\" not found" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:08.307852 kubelet[2316]: E1212 18:34:08.307798 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:08.318591 containerd[1494]: time="2025-12-12T18:34:08.318523845Z" level=info msg="StartContainer for \"705ab508afaf181966c3cba4c32fa5561ce77c1a118ed4b3f171422a16023db3\" returns successfully" Dec 12 18:34:08.816329 kubelet[2316]: I1212 18:34:08.816298 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:09.312288 kubelet[2316]: E1212 18:34:09.312174 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-8-48b4194eb4\" not found" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:09.312683 kubelet[2316]: E1212 18:34:09.312252 2316 kubelet.go:3305] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"ci-4459.2.2-8-48b4194eb4\" not found" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:09.312683 kubelet[2316]: E1212 18:34:09.312599 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:09.312683 kubelet[2316]: E1212 18:34:09.312652 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:10.313871 kubelet[2316]: E1212 18:34:10.313646 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-8-48b4194eb4\" not found" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:10.313871 kubelet[2316]: E1212 18:34:10.313793 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:10.782218 kubelet[2316]: I1212 18:34:10.782162 2316 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:10.783347 kubelet[2316]: E1212 18:34:10.783306 2316 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-8-48b4194eb4\": node \"ci-4459.2.2-8-48b4194eb4\" not found" Dec 12 18:34:10.832886 kubelet[2316]: I1212 18:34:10.832632 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:10.932372 kubelet[2316]: E1212 18:34:10.932288 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:10.932372 kubelet[2316]: I1212 18:34:10.932324 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:10.934963 kubelet[2316]: E1212 18:34:10.934893 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-8-48b4194eb4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:10.934963 kubelet[2316]: I1212 18:34:10.934952 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:10.937390 kubelet[2316]: E1212 18:34:10.937346 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-8-48b4194eb4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:11.208701 kubelet[2316]: I1212 18:34:11.208644 2316 apiserver.go:52] "Watching apiserver" Dec 12 18:34:11.231942 kubelet[2316]: I1212 18:34:11.231901 2316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:34:11.265619 kubelet[2316]: I1212 18:34:11.265559 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:11.268182 kubelet[2316]: E1212 18:34:11.268119 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:11.268388 kubelet[2316]: E1212 18:34:11.268366 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:11.315110 kubelet[2316]: I1212 18:34:11.315079 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:11.317696 kubelet[2316]: E1212 18:34:11.317608 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-8-48b4194eb4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:11.317857 kubelet[2316]: E1212 18:34:11.317836 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:11.837991 kubelet[2316]: I1212 18:34:11.837941 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:11.849320 kubelet[2316]: I1212 18:34:11.849173 2316 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:34:11.849646 kubelet[2316]: E1212 18:34:11.849627 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:12.318589 kubelet[2316]: E1212 18:34:12.318541 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:13.068282 systemd[1]: Reload requested from client PID 2594 ('systemctl') (unit session-7.scope)... Dec 12 18:34:13.068305 systemd[1]: Reloading... Dec 12 18:34:13.201282 zram_generator::config[2633]: No configuration found. Dec 12 18:34:13.487399 systemd[1]: Reloading finished in 418 ms. Dec 12 18:34:13.522128 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:34:13.537891 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:34:13.538357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:34:13.538500 systemd[1]: kubelet.service: Consumed 1.046s CPU time, 127.7M memory peak. Dec 12 18:34:13.540943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:34:13.728132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:34:13.738724 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:34:13.796105 kubelet[2688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:34:13.797696 kubelet[2688]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Dec 12 18:34:13.797696 kubelet[2688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:34:13.797696 kubelet[2688]: I1212 18:34:13.796259 2688 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:34:13.810730 kubelet[2688]: I1212 18:34:13.810681 2688 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:34:13.810730 kubelet[2688]: I1212 18:34:13.810733 2688 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:34:13.812603 kubelet[2688]: I1212 18:34:13.811255 2688 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:34:13.815590 kubelet[2688]: I1212 18:34:13.815531 2688 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:34:13.820750 kubelet[2688]: I1212 18:34:13.820707 2688 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:34:13.825739 kubelet[2688]: I1212 18:34:13.825711 2688 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:34:13.833354 kubelet[2688]: I1212 18:34:13.833297 2688 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:34:13.833609 kubelet[2688]: I1212 18:34:13.833573 2688 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:34:13.833778 kubelet[2688]: I1212 18:34:13.833605 2688 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-8-48b4194eb4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:34:13.833901 kubelet[2688]: I1212 18:34:13.833779 2688 
topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:34:13.833901 kubelet[2688]: I1212 18:34:13.833794 2688 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:34:13.833901 kubelet[2688]: I1212 18:34:13.833860 2688 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:34:13.834090 kubelet[2688]: I1212 18:34:13.834068 2688 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:34:13.835545 kubelet[2688]: I1212 18:34:13.835488 2688 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:34:13.835685 kubelet[2688]: I1212 18:34:13.835601 2688 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:34:13.835685 kubelet[2688]: I1212 18:34:13.835646 2688 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:34:13.839252 kubelet[2688]: I1212 18:34:13.839140 2688 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:34:13.842437 kubelet[2688]: I1212 18:34:13.842394 2688 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:34:13.850265 kubelet[2688]: I1212 18:34:13.850183 2688 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:34:13.851078 kubelet[2688]: I1212 18:34:13.851054 2688 server.go:1289] "Started kubelet" Dec 12 18:34:13.859139 kubelet[2688]: I1212 18:34:13.858166 2688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:34:13.877570 kubelet[2688]: I1212 18:34:13.877538 2688 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:34:13.880334 kubelet[2688]: I1212 18:34:13.880309 2688 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:34:13.880682 kubelet[2688]: I1212 18:34:13.880635 2688 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:34:13.881670 kubelet[2688]: I1212 18:34:13.881650 2688 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:34:13.881964 kubelet[2688]: I1212 18:34:13.881952 2688 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:34:13.882477 kubelet[2688]: I1212 18:34:13.882452 2688 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:34:13.883830 kubelet[2688]: I1212 18:34:13.883785 2688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:34:13.884138 kubelet[2688]: I1212 18:34:13.884125 2688 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:34:13.885905 kubelet[2688]: I1212 18:34:13.885848 2688 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:34:13.886140 kubelet[2688]: I1212 18:34:13.886092 2688 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:34:13.888075 kubelet[2688]: E1212 18:34:13.887789 2688 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:34:13.888075 kubelet[2688]: I1212 18:34:13.888006 2688 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:34:13.892889 kubelet[2688]: I1212 18:34:13.892835 2688 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:34:13.894313 kubelet[2688]: I1212 18:34:13.894272 2688 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:34:13.894313 kubelet[2688]: I1212 18:34:13.894299 2688 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:34:13.894313 kubelet[2688]: I1212 18:34:13.894321 2688 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:34:13.894550 kubelet[2688]: I1212 18:34:13.894333 2688 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:34:13.894550 kubelet[2688]: E1212 18:34:13.894386 2688 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:34:13.954172 kubelet[2688]: I1212 18:34:13.954140 2688 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:34:13.954172 kubelet[2688]: I1212 18:34:13.954160 2688 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:34:13.954172 kubelet[2688]: I1212 18:34:13.954184 2688 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:34:13.954396 kubelet[2688]: I1212 18:34:13.954352 2688 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:34:13.954396 kubelet[2688]: I1212 18:34:13.954362 2688 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:34:13.954396 kubelet[2688]: I1212 18:34:13.954378 2688 policy_none.go:49] "None policy: Start" Dec 12 18:34:13.954396 kubelet[2688]: I1212 18:34:13.954387 2688 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:34:13.954396 kubelet[2688]: I1212 18:34:13.954397 2688 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:34:13.954554 kubelet[2688]: I1212 18:34:13.954486 2688 state_mem.go:75] "Updated machine memory state" Dec 12 18:34:13.958730 kubelet[2688]: E1212 18:34:13.958677 2688 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:34:13.958922 kubelet[2688]: I1212 18:34:13.958861 2688 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:34:13.958922 kubelet[2688]: I1212 18:34:13.958873 2688 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:34:13.964550 kubelet[2688]: I1212 18:34:13.964502 2688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:34:13.967214 kubelet[2688]: E1212 18:34:13.967181 2688 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:34:13.996343 kubelet[2688]: I1212 18:34:13.996025 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:13.996343 kubelet[2688]: I1212 18:34:13.996100 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:13.996849 kubelet[2688]: I1212 18:34:13.996684 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.004168 kubelet[2688]: I1212 18:34:14.004121 2688 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:34:14.008514 kubelet[2688]: I1212 18:34:14.008474 2688 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:34:14.009969 kubelet[2688]: I1212 18:34:14.009933 2688 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:34:14.010153 kubelet[2688]: E1212 18:34:14.010003 2688 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-8-48b4194eb4\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.066318 kubelet[2688]: I1212 18:34:14.066182 2688 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.081380 kubelet[2688]: I1212 18:34:14.081340 2688 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.081563 kubelet[2688]: I1212 18:34:14.081444 2688 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.183478 kubelet[2688]: I1212 18:34:14.183382 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dfef1720dc76009c61501c1368644bb9-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" (UID: \"dfef1720dc76009c61501c1368644bb9\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.183478 kubelet[2688]: I1212 18:34:14.183438 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/095fccc87867e88fef10e13233f9d38a-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-8-48b4194eb4\" (UID: \"095fccc87867e88fef10e13233f9d38a\") " pod="kube-system/kube-scheduler-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.183977 kubelet[2688]: I1212 18:34:14.183727 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56951acf31931780745ba88d87f81def-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-8-48b4194eb4\" (UID: \"56951acf31931780745ba88d87f81def\") " pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.183977 kubelet[2688]: I1212 18:34:14.183809 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dfef1720dc76009c61501c1368644bb9-ca-certs\") 
pod \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" (UID: \"dfef1720dc76009c61501c1368644bb9\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.183977 kubelet[2688]: I1212 18:34:14.183840 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dfef1720dc76009c61501c1368644bb9-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" (UID: \"dfef1720dc76009c61501c1368644bb9\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.183977 kubelet[2688]: I1212 18:34:14.183873 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dfef1720dc76009c61501c1368644bb9-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" (UID: \"dfef1720dc76009c61501c1368644bb9\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.183977 kubelet[2688]: I1212 18:34:14.183894 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dfef1720dc76009c61501c1368644bb9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" (UID: \"dfef1720dc76009c61501c1368644bb9\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.184152 kubelet[2688]: I1212 18:34:14.183912 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56951acf31931780745ba88d87f81def-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-8-48b4194eb4\" (UID: \"56951acf31931780745ba88d87f81def\") " pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.184152 kubelet[2688]: I1212 18:34:14.183927 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56951acf31931780745ba88d87f81def-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-8-48b4194eb4\" (UID: \"56951acf31931780745ba88d87f81def\") " pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.305558 kubelet[2688]: E1212 18:34:14.305042 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:14.309373 kubelet[2688]: E1212 18:34:14.308919 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:14.311244 kubelet[2688]: E1212 18:34:14.310892 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:14.838712 kubelet[2688]: I1212 18:34:14.838669 2688 apiserver.go:52] "Watching apiserver" Dec 12 18:34:14.882610 kubelet[2688]: I1212 18:34:14.882561 2688 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:34:14.895531 kubelet[2688]: I1212 18:34:14.895152 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-8-48b4194eb4" podStartSLOduration=0.89513594 
podStartE2EDuration="895.13594ms" podCreationTimestamp="2025-12-12 18:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:34:14.894713053 +0000 UTC m=+1.147994169" watchObservedRunningTime="2025-12-12 18:34:14.89513594 +0000 UTC m=+1.148417056" Dec 12 18:34:14.924774 kubelet[2688]: I1212 18:34:14.924719 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-8-48b4194eb4" podStartSLOduration=3.92469932 podStartE2EDuration="3.92469932s" podCreationTimestamp="2025-12-12 18:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:34:14.911371743 +0000 UTC m=+1.164652861" watchObservedRunningTime="2025-12-12 18:34:14.92469932 +0000 UTC m=+1.177980429" Dec 12 18:34:14.939291 kubelet[2688]: I1212 18:34:14.937411 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.939291 kubelet[2688]: I1212 18:34:14.937940 2688 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.939291 kubelet[2688]: E1212 18:34:14.938345 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:14.957423 kubelet[2688]: I1212 18:34:14.956392 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" podStartSLOduration=0.956370011 podStartE2EDuration="956.370011ms" podCreationTimestamp="2025-12-12 18:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:34:14.927185744 +0000 UTC m=+1.180466855" watchObservedRunningTime="2025-12-12 18:34:14.956370011 +0000 UTC m=+1.209651129" Dec 12 18:34:14.959755 kubelet[2688]: I1212 18:34:14.959392 2688 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:34:14.959755 kubelet[2688]: I1212 18:34:14.959402 2688 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 18:34:14.959755 kubelet[2688]: E1212 18:34:14.959450 2688 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-8-48b4194eb4\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.959755 kubelet[2688]: E1212 18:34:14.959468 2688 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-8-48b4194eb4\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:14.959755 kubelet[2688]: E1212 18:34:14.959644 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:14.960026 kubelet[2688]: E1212 18:34:14.960012 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:15.941276 kubelet[2688]: E1212 18:34:15.940349 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:15.941276 kubelet[2688]: E1212 18:34:15.940404 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:15.941276 kubelet[2688]: E1212 18:34:15.940817 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:16.941532 kubelet[2688]: E1212 18:34:16.941497 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:16.942530 kubelet[2688]: E1212 18:34:16.942492 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:18.018598 kubelet[2688]: E1212 18:34:18.018537 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:18.944976 kubelet[2688]: E1212 18:34:18.944864 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:19.948008 kubelet[2688]: E1212 18:34:19.947915 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:20.068295 kubelet[2688]: I1212 18:34:20.068255 2688 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:34:20.068808 containerd[1494]: time="2025-12-12T18:34:20.068748490Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
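The repeated dns.go:153 errors above all report the same condition: the node's /etc/resolv.conf carries more nameserver entries than a pod's resolv.conf can hold (the classic limit is three), so kubelet truncates the list and logs the line it actually applied; note that the applied line here even contains a duplicate (67.207.67.2 appears twice). A minimal sketch of the same check, assuming a plain resolv.conf parse (this is illustrative, not kubelet's actual implementation):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the three-entry resolv.conf limit that
// kubelet enforces when building pod DNS config.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
			strings.Join(nameservers[:maxNameservers], " "))
	}
}
```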
Dec 12 18:34:20.069777 kubelet[2688]: I1212 18:34:20.069427 2688 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:34:20.821109 kubelet[2688]: I1212 18:34:20.820866 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76846f22-7535-4e8f-b573-163d6ed8ecef-kube-proxy\") pod \"kube-proxy-49cgs\" (UID: \"76846f22-7535-4e8f-b573-163d6ed8ecef\") " pod="kube-system/kube-proxy-49cgs" Dec 12 18:34:20.821109 kubelet[2688]: I1212 18:34:20.820921 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76846f22-7535-4e8f-b573-163d6ed8ecef-lib-modules\") pod \"kube-proxy-49cgs\" (UID: \"76846f22-7535-4e8f-b573-163d6ed8ecef\") " pod="kube-system/kube-proxy-49cgs" Dec 12 18:34:20.821109 kubelet[2688]: I1212 18:34:20.820959 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxqcf\" (UniqueName: \"kubernetes.io/projected/76846f22-7535-4e8f-b573-163d6ed8ecef-kube-api-access-sxqcf\") pod \"kube-proxy-49cgs\" (UID: \"76846f22-7535-4e8f-b573-163d6ed8ecef\") " pod="kube-system/kube-proxy-49cgs" Dec 12 18:34:20.821109 kubelet[2688]: I1212 18:34:20.820985 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76846f22-7535-4e8f-b573-163d6ed8ecef-xtables-lock\") pod \"kube-proxy-49cgs\" (UID: \"76846f22-7535-4e8f-b573-163d6ed8ecef\") " pod="kube-system/kube-proxy-49cgs" Dec 12 18:34:20.828832 systemd[1]: Created slice kubepods-besteffort-pod76846f22_7535_4e8f_b573_163d6ed8ecef.slice - libcontainer container kubepods-besteffort-pod76846f22_7535_4e8f_b573_163d6ed8ecef.slice. Dec 12 18:34:20.976277 systemd[1]: Created slice kubepods-besteffort-pod56cb8d31_7d24_439c_a2d0_b6b2bdbd19e7.slice - libcontainer container kubepods-besteffort-pod56cb8d31_7d24_439c_a2d0_b6b2bdbd19e7.slice. 
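Each VerifyControllerAttachedVolume entry above corresponds to one volume in the kube-proxy-49cgs pod spec: a ConfigMap volume ("kube-proxy"), two hostPath volumes ("lib-modules", "xtables-lock"), and a projected service-account token ("kube-api-access-sxqcf"). A sketch of how the first three would be declared with the k8s.io/api/core/v1 types; the host paths are the conventional kube-proxy defaults and are assumptions here, since the log records only the volume names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volumes matching the reconciler entries for kube-proxy-49cgs.
	// The projected token volume is omitted for brevity.
	volumes := []corev1.Volume{
		{
			Name: "kube-proxy",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
				},
			},
		},
		{
			Name: "lib-modules", // assumed path: the stock manifest mounts /lib/modules
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
			},
		},
		{
			Name: "xtables-lock", // assumed path: the stock manifest uses /run/xtables.lock
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}
```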
Dec 12 18:34:21.022478 kubelet[2688]: I1212 18:34:21.022403 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmcs2\" (UniqueName: \"kubernetes.io/projected/56cb8d31-7d24-439c-a2d0-b6b2bdbd19e7-kube-api-access-zmcs2\") pod \"tigera-operator-7dcd859c48-kr5nj\" (UID: \"56cb8d31-7d24-439c-a2d0-b6b2bdbd19e7\") " pod="tigera-operator/tigera-operator-7dcd859c48-kr5nj" Dec 12 18:34:21.022478 kubelet[2688]: I1212 18:34:21.022458 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56cb8d31-7d24-439c-a2d0-b6b2bdbd19e7-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kr5nj\" (UID: \"56cb8d31-7d24-439c-a2d0-b6b2bdbd19e7\") " pod="tigera-operator/tigera-operator-7dcd859c48-kr5nj" Dec 12 18:34:21.137301 kubelet[2688]: E1212 18:34:21.136915 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:21.139250 containerd[1494]: time="2025-12-12T18:34:21.139178282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49cgs,Uid:76846f22-7535-4e8f-b573-163d6ed8ecef,Namespace:kube-system,Attempt:0,}" Dec 12 18:34:21.167987 containerd[1494]: time="2025-12-12T18:34:21.167870837Z" level=info msg="connecting to shim 0a1dbcad3fc8328c234bb2853fd8b9df08697a6d57ae6dd6caf89d3189470e11" address="unix:///run/containerd/s/cc6df802d49eae5bc25fcd293bd9c189407afa3a93552c39ed9e88736105bb6a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:21.206535 systemd[1]: Started cri-containerd-0a1dbcad3fc8328c234bb2853fd8b9df08697a6d57ae6dd6caf89d3189470e11.scope - libcontainer container 0a1dbcad3fc8328c234bb2853fd8b9df08697a6d57ae6dd6caf89d3189470e11. 
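The "connecting to shim ... address=unix:///run/containerd/s/... protocol=ttrpc version=3" entry records containerd dialing the per-sandbox shim over a unix socket using ttrpc, its lightweight gRPC variant. A minimal sketch of opening such a connection with github.com/containerd/ttrpc (socket path copied from the log line; the client setup is illustrative only, since real callers wrap the connection in the shim's generated task-service client):

```go
package main

import (
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// Shim socket path taken verbatim from the containerd log above.
	const addr = "/run/containerd/s/cc6df802d49eae5bc25fcd293bd9c189407afa3a93552c39ed9e88736105bb6a"

	conn, err := net.Dial("unix", addr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// ttrpc multiplexes the shim's task RPCs (create/start/wait)
	// over this single connection.
	client := ttrpc.NewClient(conn)
	defer client.Close()
	log.Println("connected to shim")
}
```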
Dec 12 18:34:21.245928 containerd[1494]: time="2025-12-12T18:34:21.245878434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49cgs,Uid:76846f22-7535-4e8f-b573-163d6ed8ecef,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a1dbcad3fc8328c234bb2853fd8b9df08697a6d57ae6dd6caf89d3189470e11\"" Dec 12 18:34:21.247690 kubelet[2688]: E1212 18:34:21.247608 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:21.253660 containerd[1494]: time="2025-12-12T18:34:21.253571320Z" level=info msg="CreateContainer within sandbox \"0a1dbcad3fc8328c234bb2853fd8b9df08697a6d57ae6dd6caf89d3189470e11\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:34:21.270257 containerd[1494]: time="2025-12-12T18:34:21.270169058Z" level=info msg="Container 0a8c354d46f81f7839a631fe683173c2d6fa0310de8c7b294df826bed07256db: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:21.282253 containerd[1494]: time="2025-12-12T18:34:21.280964840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kr5nj,Uid:56cb8d31-7d24-439c-a2d0-b6b2bdbd19e7,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:34:21.282253 containerd[1494]: time="2025-12-12T18:34:21.281582637Z" level=info msg="CreateContainer within sandbox \"0a1dbcad3fc8328c234bb2853fd8b9df08697a6d57ae6dd6caf89d3189470e11\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a8c354d46f81f7839a631fe683173c2d6fa0310de8c7b294df826bed07256db\"" Dec 12 18:34:21.283127 containerd[1494]: time="2025-12-12T18:34:21.283096821Z" level=info msg="StartContainer for \"0a8c354d46f81f7839a631fe683173c2d6fa0310de8c7b294df826bed07256db\"" Dec 12 18:34:21.291577 containerd[1494]: time="2025-12-12T18:34:21.291510640Z" level=info msg="connecting to shim 0a8c354d46f81f7839a631fe683173c2d6fa0310de8c7b294df826bed07256db" address="unix:///run/containerd/s/cc6df802d49eae5bc25fcd293bd9c189407afa3a93552c39ed9e88736105bb6a" protocol=ttrpc version=3 Dec 12 18:34:21.308995 containerd[1494]: time="2025-12-12T18:34:21.308936868Z" level=info msg="connecting to shim d8070ebc14199198caba24612e483ad6ce1db3aca41fea7f89c311b34cefa2e5" address="unix:///run/containerd/s/dbb452d19f872698207c689ba6ee84950e10d81883f09364f65853fc49c07a07" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:21.325516 systemd[1]: Started cri-containerd-0a8c354d46f81f7839a631fe683173c2d6fa0310de8c7b294df826bed07256db.scope - libcontainer container 0a8c354d46f81f7839a631fe683173c2d6fa0310de8c7b294df826bed07256db. Dec 12 18:34:21.351468 systemd[1]: Started cri-containerd-d8070ebc14199198caba24612e483ad6ce1db3aca41fea7f89c311b34cefa2e5.scope - libcontainer container d8070ebc14199198caba24612e483ad6ce1db3aca41fea7f89c311b34cefa2e5. 
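The sandbox and container messages in this window trace the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox, and StartContainer runs the returned container id. A compressed outline of that call order against containerd's CRI socket using k8s.io/cri-api, with request fields trimmed to the minimum; the image reference is an assumption (the log never names it), and this is a sketch of the sequence, not kubelet's code:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's default CRI endpoint; the per-sandbox shim sockets
	// seen in the log sit underneath this service.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-49cgs",
			Namespace: "kube-system",
			Uid:       "76846f22-7535-4e8f-b573-163d6ed8ecef",
		},
	}

	// 1. RunPodSandbox -> sandbox id (0a1dbcad... in the log).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox -> container id (0a8c354d...).
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Hypothetical image reference; not recorded in the log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer -> "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```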
Dec 12 18:34:21.424178 containerd[1494]: time="2025-12-12T18:34:21.423947836Z" level=info msg="StartContainer for \"0a8c354d46f81f7839a631fe683173c2d6fa0310de8c7b294df826bed07256db\" returns successfully" Dec 12 18:34:21.432316 containerd[1494]: time="2025-12-12T18:34:21.431456594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kr5nj,Uid:56cb8d31-7d24-439c-a2d0-b6b2bdbd19e7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d8070ebc14199198caba24612e483ad6ce1db3aca41fea7f89c311b34cefa2e5\"" Dec 12 18:34:21.436325 containerd[1494]: time="2025-12-12T18:34:21.436282209Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:34:21.440595 systemd-resolved[1375]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Dec 12 18:34:21.947558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1402295386.mount: Deactivated successfully. Dec 12 18:34:21.957804 kubelet[2688]: E1212 18:34:21.957683 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:22.605611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518124797.mount: Deactivated successfully. Dec 12 18:34:24.305471 containerd[1494]: time="2025-12-12T18:34:24.305295514Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:24.306566 containerd[1494]: time="2025-12-12T18:34:24.306257067Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 12 18:34:24.307052 containerd[1494]: time="2025-12-12T18:34:24.307020865Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:24.308976 containerd[1494]: time="2025-12-12T18:34:24.308941199Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:24.309969 containerd[1494]: time="2025-12-12T18:34:24.309936138Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.872730136s" Dec 12 18:34:24.310097 containerd[1494]: time="2025-12-12T18:34:24.310070519Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:34:24.313715 containerd[1494]: time="2025-12-12T18:34:24.313667674Z" level=info msg="CreateContainer within sandbox \"d8070ebc14199198caba24612e483ad6ce1db3aca41fea7f89c311b34cefa2e5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:34:24.322812 containerd[1494]: time="2025-12-12T18:34:24.322767238Z" level=info msg="Container 395d6d8abdb17de28a4d9359fa2c0295fd36a2719a7d83679003246b48214e51: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:24.329605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352153597.mount: Deactivated successfully. 
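The operator image pull above reports 25,057,686 bytes fetched in 2.872730136 s, roughly 8.7 MB/s, and that 2.87 s figure matches the gap between the PullImage request at 18:34:21.436 and the ImageCreate events at 18:34:24.3. As a quick check, with the numbers copied from the log:

```go
package main

import "fmt"

func main() {
	const bytes = 25057686      // repo-digest size reported by containerd
	const seconds = 2.872730136 // pull duration reported by containerd
	fmt.Printf("%.1f MB/s\n", bytes/seconds/1e6) // prints 8.7 MB/s
}
```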
Dec 12 18:34:24.333487 containerd[1494]: time="2025-12-12T18:34:24.333424434Z" level=info msg="CreateContainer within sandbox \"d8070ebc14199198caba24612e483ad6ce1db3aca41fea7f89c311b34cefa2e5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"395d6d8abdb17de28a4d9359fa2c0295fd36a2719a7d83679003246b48214e51\"" Dec 12 18:34:24.334131 containerd[1494]: time="2025-12-12T18:34:24.334104812Z" level=info msg="StartContainer for \"395d6d8abdb17de28a4d9359fa2c0295fd36a2719a7d83679003246b48214e51\"" Dec 12 18:34:24.336890 containerd[1494]: time="2025-12-12T18:34:24.336838269Z" level=info msg="connecting to shim 395d6d8abdb17de28a4d9359fa2c0295fd36a2719a7d83679003246b48214e51" address="unix:///run/containerd/s/dbb452d19f872698207c689ba6ee84950e10d81883f09364f65853fc49c07a07" protocol=ttrpc version=3 Dec 12 18:34:24.376559 systemd[1]: Started cri-containerd-395d6d8abdb17de28a4d9359fa2c0295fd36a2719a7d83679003246b48214e51.scope - libcontainer container 395d6d8abdb17de28a4d9359fa2c0295fd36a2719a7d83679003246b48214e51. Dec 12 18:34:24.420272 containerd[1494]: time="2025-12-12T18:34:24.419816221Z" level=info msg="StartContainer for \"395d6d8abdb17de28a4d9359fa2c0295fd36a2719a7d83679003246b48214e51\" returns successfully" Dec 12 18:34:24.983804 kubelet[2688]: I1212 18:34:24.983211 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-49cgs" podStartSLOduration=4.983186418 podStartE2EDuration="4.983186418s" podCreationTimestamp="2025-12-12 18:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:34:21.972486069 +0000 UTC m=+8.225767190" watchObservedRunningTime="2025-12-12 18:34:24.983186418 +0000 UTC m=+11.236467536" Dec 12 18:34:25.458846 kubelet[2688]: E1212 18:34:25.458666 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:25.477114 kubelet[2688]: I1212 18:34:25.476791 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kr5nj" podStartSLOduration=2.600824613 podStartE2EDuration="5.47676332s" podCreationTimestamp="2025-12-12 18:34:20 +0000 UTC" firstStartedPulling="2025-12-12 18:34:21.435057092 +0000 UTC m=+7.688338202" lastFinishedPulling="2025-12-12 18:34:24.3109958 +0000 UTC m=+10.564276909" observedRunningTime="2025-12-12 18:34:24.983743079 +0000 UTC m=+11.237024179" watchObservedRunningTime="2025-12-12 18:34:25.47676332 +0000 UTC m=+11.730044439" Dec 12 18:34:26.743150 kubelet[2688]: E1212 18:34:26.743107 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:27.924908 update_engine[1469]: I20251212 18:34:27.923839 1469 update_attempter.cc:509] Updating boot flags... Dec 12 18:34:29.591849 sudo[1753]: pam_unix(sudo:session): session closed for user root Dec 12 18:34:29.598882 sshd[1752]: Connection closed by 147.75.109.163 port 55106 Dec 12 18:34:29.600962 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Dec 12 18:34:29.607659 systemd[1]: sshd@6-134.199.220.206:22-147.75.109.163:55106.service: Deactivated successfully. Dec 12 18:34:29.612982 systemd[1]: session-7.scope: Deactivated successfully. 
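The two latency figures reported for tigera-operator reconcile exactly: podStartSLOduration excludes time spent pulling images, so it is the end-to-end duration (observedRunningTime minus podCreationTimestamp) minus the pull window (lastFinishedPulling minus firstStartedPulling). With the timestamps from the entry above, 5.47676332 s − 2.875938708 s ≈ 2.600824613 s, matching the logged SLO value to within a nanosecond of rounding. A sketch of that arithmetic (the timestamp layout is assumed to match the log's rendering):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s) // fractional seconds are accepted when parsing
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-12-12 18:34:20 +0000 UTC")
	running := parse("2025-12-12 18:34:25.47676332 +0000 UTC")
	pullStart := parse("2025-12-12 18:34:21.435057092 +0000 UTC")
	pullEnd := parse("2025-12-12 18:34:24.3109958 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration (E2E minus pull time)
	fmt.Println(e2e, slo)               // 5.47676332s 2.600824612s
}
```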
Dec 12 18:34:29.614388 systemd[1]: session-7.scope: Consumed 6.412s CPU time, 161.2M memory peak. Dec 12 18:34:29.618021 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:34:29.625316 systemd-logind[1468]: Removed session 7. Dec 12 18:34:35.980199 systemd[1]: Created slice kubepods-besteffort-pod53f953a7_f878_4db6_9b2f_b3a91b78a143.slice - libcontainer container kubepods-besteffort-pod53f953a7_f878_4db6_9b2f_b3a91b78a143.slice. Dec 12 18:34:36.026080 kubelet[2688]: I1212 18:34:36.025933 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzc2z\" (UniqueName: \"kubernetes.io/projected/53f953a7-f878-4db6-9b2f-b3a91b78a143-kube-api-access-mzc2z\") pod \"calico-typha-5cf87dcff5-6p5rh\" (UID: \"53f953a7-f878-4db6-9b2f-b3a91b78a143\") " pod="calico-system/calico-typha-5cf87dcff5-6p5rh" Dec 12 18:34:36.028084 kubelet[2688]: I1212 18:34:36.027339 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/53f953a7-f878-4db6-9b2f-b3a91b78a143-typha-certs\") pod \"calico-typha-5cf87dcff5-6p5rh\" (UID: \"53f953a7-f878-4db6-9b2f-b3a91b78a143\") " pod="calico-system/calico-typha-5cf87dcff5-6p5rh" Dec 12 18:34:36.028084 kubelet[2688]: I1212 18:34:36.027378 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53f953a7-f878-4db6-9b2f-b3a91b78a143-tigera-ca-bundle\") pod \"calico-typha-5cf87dcff5-6p5rh\" (UID: \"53f953a7-f878-4db6-9b2f-b3a91b78a143\") " pod="calico-system/calico-typha-5cf87dcff5-6p5rh" Dec 12 18:34:36.204944 systemd[1]: Created slice kubepods-besteffort-pod731c1b61_9924_4af3_9111_38f1cd1b961a.slice - libcontainer container kubepods-besteffort-pod731c1b61_9924_4af3_9111_38f1cd1b961a.slice. 
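The transient slice names created above encode the pod UID with dashes replaced by underscores, because systemd reserves "-" as a hierarchy separator in slice names: UID 53f953a7-f878-4db6-9b2f-b3a91b78a143 becomes kubepods-besteffort-pod53f953a7_f878_4db6_9b2f_b3a91b78a143.slice. A one-liner reproducing the mapping seen in the log:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	uid := "53f953a7-f878-4db6-9b2f-b3a91b78a143" // pod UID from the log
	slice := "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
	fmt.Println(slice) // kubepods-besteffort-pod53f953a7_f878_4db6_9b2f_b3a91b78a143.slice
}
```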
Dec 12 18:34:36.229885 kubelet[2688]: I1212 18:34:36.229302 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/731c1b61-9924-4af3-9111-38f1cd1b961a-cni-log-dir\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.229885 kubelet[2688]: I1212 18:34:36.229405 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/731c1b61-9924-4af3-9111-38f1cd1b961a-node-certs\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.229885 kubelet[2688]: I1212 18:34:36.229517 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/731c1b61-9924-4af3-9111-38f1cd1b961a-flexvol-driver-host\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.229885 kubelet[2688]: I1212 18:34:36.229546 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/731c1b61-9924-4af3-9111-38f1cd1b961a-policysync\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.229885 kubelet[2688]: I1212 18:34:36.229572 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/731c1b61-9924-4af3-9111-38f1cd1b961a-cni-bin-dir\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.230684 kubelet[2688]: I1212 18:34:36.229595 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/731c1b61-9924-4af3-9111-38f1cd1b961a-cni-net-dir\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.230684 kubelet[2688]: I1212 18:34:36.229657 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/731c1b61-9924-4af3-9111-38f1cd1b961a-lib-modules\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.230684 kubelet[2688]: I1212 18:34:36.229679 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/731c1b61-9924-4af3-9111-38f1cd1b961a-var-run-calico\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.230684 kubelet[2688]: I1212 18:34:36.229707 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/731c1b61-9924-4af3-9111-38f1cd1b961a-tigera-ca-bundle\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.230684 kubelet[2688]: I1212 18:34:36.229738 2688 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/731c1b61-9924-4af3-9111-38f1cd1b961a-xtables-lock\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.230985 kubelet[2688]: I1212 18:34:36.229764 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b8jz\" (UniqueName: \"kubernetes.io/projected/731c1b61-9924-4af3-9111-38f1cd1b961a-kube-api-access-4b8jz\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.230985 kubelet[2688]: I1212 18:34:36.229788 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/731c1b61-9924-4af3-9111-38f1cd1b961a-var-lib-calico\") pod \"calico-node-l2gt7\" (UID: \"731c1b61-9924-4af3-9111-38f1cd1b961a\") " pod="calico-system/calico-node-l2gt7" Dec 12 18:34:36.285808 kubelet[2688]: E1212 18:34:36.285519 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:36.287356 containerd[1494]: time="2025-12-12T18:34:36.287305814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cf87dcff5-6p5rh,Uid:53f953a7-f878-4db6-9b2f-b3a91b78a143,Namespace:calico-system,Attempt:0,}" Dec 12 18:34:36.318057 containerd[1494]: time="2025-12-12T18:34:36.317923246Z" level=info msg="connecting to shim 60917a6099a9ca4810da76abd4130c84989d39c1da2573567144f998b26dbe1a" address="unix:///run/containerd/s/cd2f35f07f3e8272cb94d311e439c70d9a7efa3b89580081c47e5b4442566d66" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:36.383171 kubelet[2688]: E1212 18:34:36.382729 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.383171 kubelet[2688]: W1212 18:34:36.382756 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.385379 kubelet[2688]: E1212 18:34:36.383976 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.398478 systemd[1]: Started cri-containerd-60917a6099a9ca4810da76abd4130c84989d39c1da2573567144f998b26dbe1a.scope - libcontainer container 60917a6099a9ca4810da76abd4130c84989d39c1da2573567144f998b26dbe1a. 
Dec 12 18:34:36.468561 kubelet[2688]: E1212 18:34:36.468501 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:34:36.509712 kubelet[2688]: E1212 18:34:36.509588 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:36.510691 containerd[1494]: time="2025-12-12T18:34:36.510573901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l2gt7,Uid:731c1b61-9924-4af3-9111-38f1cd1b961a,Namespace:calico-system,Attempt:0,}" Dec 12 18:34:36.526077 kubelet[2688]: E1212 18:34:36.524648 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.526077 kubelet[2688]: W1212 18:34:36.524681 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.526077 kubelet[2688]: E1212 18:34:36.524705 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.526077 kubelet[2688]: E1212 18:34:36.524926 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.526077 kubelet[2688]: W1212 18:34:36.524936 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.526077 kubelet[2688]: E1212 18:34:36.524948 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.526077 kubelet[2688]: E1212 18:34:36.525302 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.526077 kubelet[2688]: W1212 18:34:36.525313 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.526077 kubelet[2688]: E1212 18:34:36.525328 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.526851 kubelet[2688]: E1212 18:34:36.526148 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.526851 kubelet[2688]: W1212 18:34:36.526160 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.526851 kubelet[2688]: E1212 18:34:36.526173 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:34:36.526851 kubelet[2688]: E1212 18:34:36.526392 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.526851 kubelet[2688]: W1212 18:34:36.526400 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.526851 kubelet[2688]: E1212 18:34:36.526409 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.526851 kubelet[2688]: E1212 18:34:36.526565 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.526851 kubelet[2688]: W1212 18:34:36.526572 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.527717 kubelet[2688]: E1212 18:34:36.527683 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.528065 kubelet[2688]: E1212 18:34:36.528047 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.528065 kubelet[2688]: W1212 18:34:36.528061 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.528486 kubelet[2688]: E1212 18:34:36.528463 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.529454 kubelet[2688]: E1212 18:34:36.529393 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.529695 kubelet[2688]: W1212 18:34:36.529673 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.529825 kubelet[2688]: E1212 18:34:36.529697 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.530738 kubelet[2688]: E1212 18:34:36.530518 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.530738 kubelet[2688]: W1212 18:34:36.530735 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.530817 kubelet[2688]: E1212 18:34:36.530752 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:34:36.532011 kubelet[2688]: E1212 18:34:36.531974 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.532011 kubelet[2688]: W1212 18:34:36.531990 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.532011 kubelet[2688]: E1212 18:34:36.532002 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.532757 kubelet[2688]: E1212 18:34:36.532614 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.532757 kubelet[2688]: W1212 18:34:36.532628 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.532757 kubelet[2688]: E1212 18:34:36.532640 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.534799 kubelet[2688]: E1212 18:34:36.534742 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.534799 kubelet[2688]: W1212 18:34:36.534767 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.535071 kubelet[2688]: E1212 18:34:36.534821 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.535218 kubelet[2688]: E1212 18:34:36.535204 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.535270 kubelet[2688]: W1212 18:34:36.535220 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.535270 kubelet[2688]: E1212 18:34:36.535250 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.535506 kubelet[2688]: E1212 18:34:36.535438 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.535506 kubelet[2688]: W1212 18:34:36.535451 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.535687 kubelet[2688]: E1212 18:34:36.535464 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:34:36.536323 kubelet[2688]: E1212 18:34:36.536260 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.536323 kubelet[2688]: W1212 18:34:36.536274 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.536323 kubelet[2688]: E1212 18:34:36.536286 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.537273 kubelet[2688]: E1212 18:34:36.537050 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.537273 kubelet[2688]: W1212 18:34:36.537068 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.537273 kubelet[2688]: E1212 18:34:36.537176 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.538054 kubelet[2688]: E1212 18:34:36.538034 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.538054 kubelet[2688]: W1212 18:34:36.538049 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.538054 kubelet[2688]: E1212 18:34:36.538061 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.540272 kubelet[2688]: E1212 18:34:36.539250 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.540272 kubelet[2688]: W1212 18:34:36.539268 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.540272 kubelet[2688]: E1212 18:34:36.539381 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.540272 kubelet[2688]: E1212 18:34:36.539910 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.540272 kubelet[2688]: W1212 18:34:36.539921 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.540272 kubelet[2688]: E1212 18:34:36.539932 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:34:36.542035 kubelet[2688]: E1212 18:34:36.541994 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.542035 kubelet[2688]: W1212 18:34:36.542027 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.542272 kubelet[2688]: E1212 18:34:36.542046 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.543266 kubelet[2688]: E1212 18:34:36.543243 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.543266 kubelet[2688]: W1212 18:34:36.543261 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.543377 kubelet[2688]: E1212 18:34:36.543280 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.543562 kubelet[2688]: I1212 18:34:36.543541 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6e78d63-2cda-428b-a981-9d8b48e5f776-socket-dir\") pod \"csi-node-driver-g2sqb\" (UID: \"c6e78d63-2cda-428b-a981-9d8b48e5f776\") " pod="calico-system/csi-node-driver-g2sqb" Dec 12 18:34:36.543650 containerd[1494]: time="2025-12-12T18:34:36.543539899Z" level=info msg="connecting to shim 570c637b01b4e406915eea5b25291d318f513957a7ea87fffdcb254b49e8983b" address="unix:///run/containerd/s/ca374a205cb3b7b74cbbb7b1c5b08391ff107b14bcec42ec03901a6cec5e2a21" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:36.544945 kubelet[2688]: E1212 18:34:36.544921 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.544945 kubelet[2688]: W1212 18:34:36.544940 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.545091 kubelet[2688]: E1212 18:34:36.544955 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.545938 kubelet[2688]: E1212 18:34:36.545914 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.545938 kubelet[2688]: W1212 18:34:36.545933 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.546053 kubelet[2688]: E1212 18:34:36.545950 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:34:36.546453 kubelet[2688]: E1212 18:34:36.546435 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.546453 kubelet[2688]: W1212 18:34:36.546450 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.546546 kubelet[2688]: E1212 18:34:36.546463 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.546884 kubelet[2688]: I1212 18:34:36.546860 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c6e78d63-2cda-428b-a981-9d8b48e5f776-varrun\") pod \"csi-node-driver-g2sqb\" (UID: \"c6e78d63-2cda-428b-a981-9d8b48e5f776\") " pod="calico-system/csi-node-driver-g2sqb" Dec 12 18:34:36.548418 kubelet[2688]: E1212 18:34:36.548386 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.548418 kubelet[2688]: W1212 18:34:36.548408 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.548535 kubelet[2688]: E1212 18:34:36.548425 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.550032 kubelet[2688]: E1212 18:34:36.549995 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.550032 kubelet[2688]: W1212 18:34:36.550019 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.550032 kubelet[2688]: E1212 18:34:36.550037 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.551710 kubelet[2688]: E1212 18:34:36.551682 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.551710 kubelet[2688]: W1212 18:34:36.551704 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.551843 kubelet[2688]: E1212 18:34:36.551721 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:34:36.551843 kubelet[2688]: I1212 18:34:36.551765 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6e78d63-2cda-428b-a981-9d8b48e5f776-registration-dir\") pod \"csi-node-driver-g2sqb\" (UID: \"c6e78d63-2cda-428b-a981-9d8b48e5f776\") " pod="calico-system/csi-node-driver-g2sqb" Dec 12 18:34:36.552133 kubelet[2688]: E1212 18:34:36.552098 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.552133 kubelet[2688]: W1212 18:34:36.552113 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.552133 kubelet[2688]: E1212 18:34:36.552128 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.552294 kubelet[2688]: I1212 18:34:36.552158 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlvsq\" (UniqueName: \"kubernetes.io/projected/c6e78d63-2cda-428b-a981-9d8b48e5f776-kube-api-access-dlvsq\") pod \"csi-node-driver-g2sqb\" (UID: \"c6e78d63-2cda-428b-a981-9d8b48e5f776\") " pod="calico-system/csi-node-driver-g2sqb" Dec 12 18:34:36.554364 kubelet[2688]: E1212 18:34:36.554286 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.554364 kubelet[2688]: W1212 18:34:36.554317 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.554364 kubelet[2688]: E1212 18:34:36.554344 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.555116 kubelet[2688]: I1212 18:34:36.555072 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6e78d63-2cda-428b-a981-9d8b48e5f776-kubelet-dir\") pod \"csi-node-driver-g2sqb\" (UID: \"c6e78d63-2cda-428b-a981-9d8b48e5f776\") " pod="calico-system/csi-node-driver-g2sqb" Dec 12 18:34:36.555745 kubelet[2688]: E1212 18:34:36.555724 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.555745 kubelet[2688]: W1212 18:34:36.555743 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.555979 kubelet[2688]: E1212 18:34:36.555761 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:34:36.556932 kubelet[2688]: E1212 18:34:36.556903 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.556932 kubelet[2688]: W1212 18:34:36.556924 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.557137 kubelet[2688]: E1212 18:34:36.556941 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.558429 kubelet[2688]: E1212 18:34:36.558402 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.558429 kubelet[2688]: W1212 18:34:36.558420 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.558643 kubelet[2688]: E1212 18:34:36.558438 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.558680 kubelet[2688]: E1212 18:34:36.558656 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.558680 kubelet[2688]: W1212 18:34:36.558665 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.558680 kubelet[2688]: E1212 18:34:36.558674 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.558955 kubelet[2688]: E1212 18:34:36.558864 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.558955 kubelet[2688]: W1212 18:34:36.558876 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.558955 kubelet[2688]: E1212 18:34:36.558884 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.559086 kubelet[2688]: E1212 18:34:36.559039 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.559086 kubelet[2688]: W1212 18:34:36.559046 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.559086 kubelet[2688]: E1212 18:34:36.559053 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:34:36.579504 systemd[1]: Started cri-containerd-570c637b01b4e406915eea5b25291d318f513957a7ea87fffdcb254b49e8983b.scope - libcontainer container 570c637b01b4e406915eea5b25291d318f513957a7ea87fffdcb254b49e8983b. Dec 12 18:34:36.626750 containerd[1494]: time="2025-12-12T18:34:36.626695918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l2gt7,Uid:731c1b61-9924-4af3-9111-38f1cd1b961a,Namespace:calico-system,Attempt:0,} returns sandbox id \"570c637b01b4e406915eea5b25291d318f513957a7ea87fffdcb254b49e8983b\"" Dec 12 18:34:36.630288 kubelet[2688]: E1212 18:34:36.629030 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:36.631917 containerd[1494]: time="2025-12-12T18:34:36.631871496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:34:36.659157 kubelet[2688]: E1212 18:34:36.658885 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.659157 kubelet[2688]: W1212 18:34:36.658921 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.659157 kubelet[2688]: E1212 18:34:36.658950 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The driver-call.go:262 / driver-call.go:149 / plugins.go:703 FlexVolume error triplet above repeats verbatim 24 more times between 18:34:36.660 and 18:34:36.680 as the kubelet re-probes the nodeagent~uds plugin directory.]
Dec 12 18:34:36.698757 containerd[1494]: time="2025-12-12T18:34:36.698607230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cf87dcff5-6p5rh,Uid:53f953a7-f878-4db6-9b2f-b3a91b78a143,Namespace:calico-system,Attempt:0,} returns sandbox id \"60917a6099a9ca4810da76abd4130c84989d39c1da2573567144f998b26dbe1a\"" Dec 12 18:34:36.700760 kubelet[2688]: E1212 18:34:36.700693 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:34:36.700760 kubelet[2688]: W1212 18:34:36.700734 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:34:36.700760 kubelet[2688]: E1212 18:34:36.700768 2688 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:34:36.701670 kubelet[2688]: E1212 18:34:36.701640 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:37.856883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279998781.mount: Deactivated successfully.
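The repeated "unexpected end of JSON input" is Go's encoding/json error for empty input: the kubelet execs the FlexVolume driver binary with `init` and unmarshals whatever it prints to stdout, and since /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, there is nothing to parse. A minimal sketch of that failure mode (the driverStatus shape here is illustrative, not the kubelet's exact type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is an illustrative stand-in for the JSON a FlexVolume driver
// prints on "init", e.g. {"status":"Success","capabilities":{"attach":false}};
// it is not the kubelet's exact type.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The uds binary is missing, so the "init" exec produces empty stdout;
	// unmarshalling "" fails exactly the way the kubelet entries above do.
	var st driverStatus
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input
}
```

The noise presumably stops once the flexvol-driver init container started below installs the binary into that plugin directory.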
Dec 12 18:34:37.898833 kubelet[2688]: E1212 18:34:37.898745 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:34:37.968318 containerd[1494]: time="2025-12-12T18:34:37.968199003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:37.969567 containerd[1494]: time="2025-12-12T18:34:37.969316636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Dec 12 18:34:37.971263 containerd[1494]: time="2025-12-12T18:34:37.970916101Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:37.974913 containerd[1494]: time="2025-12-12T18:34:37.974841106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:37.976331 containerd[1494]: time="2025-12-12T18:34:37.976273213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.344167777s" Dec 12 18:34:37.976533 containerd[1494]: time="2025-12-12T18:34:37.976515982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:34:37.978958 containerd[1494]: time="2025-12-12T18:34:37.978444311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:34:37.984529 containerd[1494]: time="2025-12-12T18:34:37.984479651Z" level=info msg="CreateContainer within sandbox \"570c637b01b4e406915eea5b25291d318f513957a7ea87fffdcb254b49e8983b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:34:37.999584 containerd[1494]: time="2025-12-12T18:34:37.999481573Z" level=info msg="Container ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:38.003946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount413569494.mount: Deactivated successfully. 
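The csi-node-driver pod keeps failing to sync because the runtime still reports NetworkReady=false: containerd flags the node network ready only once a CNI configuration appears in its conf directory, which the install-cni container created further down is responsible for writing. A rough sketch of that readiness condition, assuming the conventional /etc/cni/net.d path and config extensions (neither is shown in this log):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigured approximates the runtime's NetworkReady condition: true once
// at least one CNI network config exists in the conf directory.
func cniConfigured(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	// Conventional default conf dir; the actual path comes from containerd's config.
	fmt.Println(cniConfigured("/etc/cni/net.d"))
}
```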
Dec 12 18:34:38.016901 containerd[1494]: time="2025-12-12T18:34:38.016784352Z" level=info msg="CreateContainer within sandbox \"570c637b01b4e406915eea5b25291d318f513957a7ea87fffdcb254b49e8983b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b\"" Dec 12 18:34:38.018344 containerd[1494]: time="2025-12-12T18:34:38.018267951Z" level=info msg="StartContainer for \"ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b\"" Dec 12 18:34:38.021410 containerd[1494]: time="2025-12-12T18:34:38.021336064Z" level=info msg="connecting to shim ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b" address="unix:///run/containerd/s/ca374a205cb3b7b74cbbb7b1c5b08391ff107b14bcec42ec03901a6cec5e2a21" protocol=ttrpc version=3 Dec 12 18:34:38.063561 systemd[1]: Started cri-containerd-ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b.scope - libcontainer container ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b. Dec 12 18:34:38.180948 containerd[1494]: time="2025-12-12T18:34:38.180840049Z" level=info msg="StartContainer for \"ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b\" returns successfully" Dec 12 18:34:38.202186 systemd[1]: cri-containerd-ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b.scope: Deactivated successfully. Dec 12 18:34:38.239016 containerd[1494]: time="2025-12-12T18:34:38.238960520Z" level=info msg="received container exit event container_id:\"ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b\" id:\"ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b\" pid:3303 exited_at:{seconds:1765564478 nanos:207136350}" Dec 12 18:34:38.278890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff502469f84b81e812327bc8476f1a6b4424f719c592e88f3e413a6db643647b-rootfs.mount: Deactivated successfully. 
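The exit event above carries the container's exit time as a raw protobuf timestamp (exited_at:{seconds:1765564478 nanos:207136350}); converting it confirms it lines up with the wall-clock log time. A quick check:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the flexvol-driver container exit event above.
	t := time.Unix(1765564478, 207136350).UTC()
	fmt.Println(t) // 2025-12-12 18:34:38.20713635 +0000 UTC
}
```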
Dec 12 18:34:39.018126 kubelet[2688]: E1212 18:34:39.018070 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:39.895242 kubelet[2688]: E1212 18:34:39.895082 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:34:40.093888 containerd[1494]: time="2025-12-12T18:34:40.093831021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:40.094753 containerd[1494]: time="2025-12-12T18:34:40.094656174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Dec 12 18:34:40.095571 containerd[1494]: time="2025-12-12T18:34:40.095336789Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:40.097846 containerd[1494]: time="2025-12-12T18:34:40.097806170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:40.098669 containerd[1494]: time="2025-12-12T18:34:40.098629317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.12014805s" Dec 12 18:34:40.098669 containerd[1494]: time="2025-12-12T18:34:40.098671883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 18:34:40.103436 containerd[1494]: time="2025-12-12T18:34:40.100302869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:34:40.123317 containerd[1494]: time="2025-12-12T18:34:40.123234800Z" level=info msg="CreateContainer within sandbox \"60917a6099a9ca4810da76abd4130c84989d39c1da2573567144f998b26dbe1a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 18:34:40.157809 containerd[1494]: time="2025-12-12T18:34:40.157667983Z" level=info msg="Container dc48ed74fa8cac668d6b117427f5da45522b2b694ab16dbc118b6a7c31f518fd: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:40.159475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155840263.mount: Deactivated successfully. 
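The recurring dns.go:153 entries mean the node's resolv.conf lists more nameservers than the kubelet's limit of three (mirroring glibc's MAXNS), so the tail of the list is dropped; the applied line even shows 67.207.67.2 surviving twice. A sketch of the truncation, with a hypothetical fourth host entry since the log only shows the survivors:

```go
package main

import (
	"fmt"
	"strings"
)

// The kubelet caps resolver entries at three (glibc's MAXNS); extras are
// dropped and the "Nameserver limits exceeded" warning above is logged.
const maxNameservers = 3

func applyLimit(servers []string) []string {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical host list (192.0.2.53 is a documentation address);
	// the log only shows the three survivors, including 67.207.67.2 twice.
	host := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "192.0.2.53"}
	fmt.Println(strings.Join(applyLimit(host), " "))
	// Output: 67.207.67.2 67.207.67.3 67.207.67.2
}
```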
Dec 12 18:34:40.171089 containerd[1494]: time="2025-12-12T18:34:40.171012699Z" level=info msg="CreateContainer within sandbox \"60917a6099a9ca4810da76abd4130c84989d39c1da2573567144f998b26dbe1a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dc48ed74fa8cac668d6b117427f5da45522b2b694ab16dbc118b6a7c31f518fd\"" Dec 12 18:34:40.172989 containerd[1494]: time="2025-12-12T18:34:40.172495799Z" level=info msg="StartContainer for \"dc48ed74fa8cac668d6b117427f5da45522b2b694ab16dbc118b6a7c31f518fd\"" Dec 12 18:34:40.175378 containerd[1494]: time="2025-12-12T18:34:40.175340961Z" level=info msg="connecting to shim dc48ed74fa8cac668d6b117427f5da45522b2b694ab16dbc118b6a7c31f518fd" address="unix:///run/containerd/s/cd2f35f07f3e8272cb94d311e439c70d9a7efa3b89580081c47e5b4442566d66" protocol=ttrpc version=3 Dec 12 18:34:40.213545 systemd[1]: Started cri-containerd-dc48ed74fa8cac668d6b117427f5da45522b2b694ab16dbc118b6a7c31f518fd.scope - libcontainer container dc48ed74fa8cac668d6b117427f5da45522b2b694ab16dbc118b6a7c31f518fd. Dec 12 18:34:40.306647 containerd[1494]: time="2025-12-12T18:34:40.306607124Z" level=info msg="StartContainer for \"dc48ed74fa8cac668d6b117427f5da45522b2b694ab16dbc118b6a7c31f518fd\" returns successfully" Dec 12 18:34:41.028731 kubelet[2688]: E1212 18:34:41.028679 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:41.062108 kubelet[2688]: I1212 18:34:41.062042 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cf87dcff5-6p5rh" podStartSLOduration=2.665129997 podStartE2EDuration="6.062017751s" podCreationTimestamp="2025-12-12 18:34:35 +0000 UTC" firstStartedPulling="2025-12-12 18:34:36.702997424 +0000 UTC m=+22.956278533" lastFinishedPulling="2025-12-12 18:34:40.099885177 +0000 UTC m=+26.353166287" observedRunningTime="2025-12-12 18:34:41.047304815 +0000 UTC m=+27.300585933" watchObservedRunningTime="2025-12-12 18:34:41.062017751 +0000 UTC m=+27.315298863" Dec 12 18:34:41.895698 kubelet[2688]: E1212 18:34:41.895587 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:34:42.031454 kubelet[2688]: E1212 18:34:42.030079 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:43.033717 kubelet[2688]: E1212 18:34:43.033587 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:43.896398 kubelet[2688]: E1212 18:34:43.896253 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:34:44.167218 containerd[1494]: time="2025-12-12T18:34:44.166807531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:44.168298 containerd[1494]: time="2025-12-12T18:34:44.168003911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 12 18:34:44.168944 containerd[1494]: time="2025-12-12T18:34:44.168893471Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:44.171602 containerd[1494]: time="2025-12-12T18:34:44.171549607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:44.172402 containerd[1494]: time="2025-12-12T18:34:44.172339168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.071986428s" Dec 12 18:34:44.172546 containerd[1494]: time="2025-12-12T18:34:44.172528323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:34:44.177865 containerd[1494]: time="2025-12-12T18:34:44.177680522Z" level=info msg="CreateContainer within sandbox \"570c637b01b4e406915eea5b25291d318f513957a7ea87fffdcb254b49e8983b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:34:44.190763 containerd[1494]: time="2025-12-12T18:34:44.190668997Z" level=info msg="Container 1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:44.193643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2734711367.mount: Deactivated successfully. Dec 12 18:34:44.201682 containerd[1494]: time="2025-12-12T18:34:44.201584424Z" level=info msg="CreateContainer within sandbox \"570c637b01b4e406915eea5b25291d318f513957a7ea87fffdcb254b49e8983b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4\"" Dec 12 18:34:44.204040 containerd[1494]: time="2025-12-12T18:34:44.202384630Z" level=info msg="StartContainer for \"1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4\"" Dec 12 18:34:44.207304 containerd[1494]: time="2025-12-12T18:34:44.207149927Z" level=info msg="connecting to shim 1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4" address="unix:///run/containerd/s/ca374a205cb3b7b74cbbb7b1c5b08391ff107b14bcec42ec03901a6cec5e2a21" protocol=ttrpc version=3 Dec 12 18:34:44.242595 systemd[1]: Started cri-containerd-1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4.scope - libcontainer container 1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4. Dec 12 18:34:44.339889 containerd[1494]: time="2025-12-12T18:34:44.339840183Z" level=info msg="StartContainer for \"1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4\" returns successfully" Dec 12 18:34:45.013808 systemd[1]: cri-containerd-1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4.scope: Deactivated successfully. 
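The pod_startup_latency_tracker entry for calico-typha-5cf87dcff5-6p5rh above is internally consistent with the SLO figure excluding image-pull time: the E2E duration is observedRunningTime minus podCreationTimestamp, and subtracting the pull window recovers the SLO duration. A re-derivation of those numbers:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// podStartE2EDuration: observedRunningTime - podCreationTimestamp
	// (18:34:41.062017751 - 18:34:35).
	e2e := 6062017751 * time.Nanosecond
	// Image-pull window: lastFinishedPulling - firstStartedPulling
	// (18:34:40.099885177 - 18:34:36.702997424).
	pull := 3396887753 * time.Nanosecond
	fmt.Println(e2e - pull) // 2.665129998s, the logged SLO figure up to rounding
}
```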
Dec 12 18:34:45.014763 systemd[1]: cri-containerd-1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4.scope: Consumed 681ms CPU time, 163.3M memory peak, 14.6M read from disk, 171.3M written to disk. Dec 12 18:34:45.043058 containerd[1494]: time="2025-12-12T18:34:45.042996185Z" level=info msg="received container exit event container_id:\"1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4\" id:\"1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4\" pid:3406 exited_at:{seconds:1765564485 nanos:15888994}" Dec 12 18:34:45.087347 kubelet[2688]: E1212 18:34:45.087057 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:45.106198 kubelet[2688]: I1212 18:34:45.104668 2688 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:34:45.134909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dbcbdca68a0f0411e5620cddd4a068cf2584d718a986393d60fd07853e404f4-rootfs.mount: Deactivated successfully. Dec 12 18:34:45.208232 systemd[1]: Created slice kubepods-besteffort-pode47b8144_038b_48bb_9d02_85c4035c0eac.slice - libcontainer container kubepods-besteffort-pode47b8144_038b_48bb_9d02_85c4035c0eac.slice. Dec 12 18:34:45.229622 kubelet[2688]: I1212 18:34:45.229095 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2zmx\" (UniqueName: \"kubernetes.io/projected/e47b8144-038b-48bb-9d02-85c4035c0eac-kube-api-access-b2zmx\") pod \"calico-kube-controllers-7858cfdf57-zqtcq\" (UID: \"e47b8144-038b-48bb-9d02-85c4035c0eac\") " pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" Dec 12 18:34:45.229622 kubelet[2688]: I1212 18:34:45.229174 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e47b8144-038b-48bb-9d02-85c4035c0eac-tigera-ca-bundle\") pod \"calico-kube-controllers-7858cfdf57-zqtcq\" (UID: \"e47b8144-038b-48bb-9d02-85c4035c0eac\") " pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" Dec 12 18:34:45.249610 systemd[1]: Created slice kubepods-besteffort-podce5f2337_41ff_4270_94bb_ddbc5378e9ad.slice - libcontainer container kubepods-besteffort-podce5f2337_41ff_4270_94bb_ddbc5378e9ad.slice. Dec 12 18:34:45.264715 systemd[1]: Created slice kubepods-burstable-pod9348a57d_6ad8_4adf_8191_03a10aab4279.slice - libcontainer container kubepods-burstable-pod9348a57d_6ad8_4adf_8191_03a10aab4279.slice. Dec 12 18:34:45.277047 systemd[1]: Created slice kubepods-burstable-podf7b29f1a_552f_4b01_88ba_ba01aad4f2e4.slice - libcontainer container kubepods-burstable-podf7b29f1a_552f_4b01_88ba_ba01aad4f2e4.slice. Dec 12 18:34:45.286740 systemd[1]: Created slice kubepods-besteffort-pod0e14d6f7_4179_48cd_a9f8_ec5f09bb3e29.slice - libcontainer container kubepods-besteffort-pod0e14d6f7_4179_48cd_a9f8_ec5f09bb3e29.slice. Dec 12 18:34:45.297862 systemd[1]: Created slice kubepods-besteffort-pod7961b513_fe6b_4e9c_af45_39f62e7bf7e0.slice - libcontainer container kubepods-besteffort-pod7961b513_fe6b_4e9c_af45_39f62e7bf7e0.slice. Dec 12 18:34:45.307379 systemd[1]: Created slice kubepods-besteffort-podc02cbe9e_1b81_42e6_bc64_5bb369970158.slice - libcontainer container kubepods-besteffort-podc02cbe9e_1b81_42e6_bc64_5bb369970158.slice. 
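The burst of "Created slice" entries above is the kubelet creating one systemd slice per newly schedulable pod, named from the QoS class plus the pod UID with dashes mapped to underscores (systemd reserves "-" as a hierarchy separator in unit names). A sketch of the mapping, verified against the slice names in the log:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the naming visible in the "Created slice" entries:
// QoS class prefix plus the pod UID with "-" replaced by "_", since systemd
// reserves "-" as a hierarchy separator in unit names.
func podSliceName(qos, uid string) string {
	return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	fmt.Println(podSliceName("besteffort", "e47b8144-038b-48bb-9d02-85c4035c0eac"))
	// kubepods-besteffort-pode47b8144_038b_48bb_9d02_85c4035c0eac.slice
}
```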
Dec 12 18:34:45.331495 kubelet[2688]: I1212 18:34:45.331423 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-whisker-ca-bundle\") pod \"whisker-788f487f5c-j4ttj\" (UID: \"ce5f2337-41ff-4270-94bb-ddbc5378e9ad\") " pod="calico-system/whisker-788f487f5c-j4ttj" Dec 12 18:34:45.331495 kubelet[2688]: I1212 18:34:45.331492 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xshfs\" (UniqueName: \"kubernetes.io/projected/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-kube-api-access-xshfs\") pod \"whisker-788f487f5c-j4ttj\" (UID: \"ce5f2337-41ff-4270-94bb-ddbc5378e9ad\") " pod="calico-system/whisker-788f487f5c-j4ttj" Dec 12 18:34:45.331753 kubelet[2688]: I1212 18:34:45.331515 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c02cbe9e-1b81-42e6-bc64-5bb369970158-goldmane-ca-bundle\") pod \"goldmane-666569f655-g4nz4\" (UID: \"c02cbe9e-1b81-42e6-bc64-5bb369970158\") " pod="calico-system/goldmane-666569f655-g4nz4" Dec 12 18:34:45.331753 kubelet[2688]: I1212 18:34:45.331536 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5fvp\" (UniqueName: \"kubernetes.io/projected/c02cbe9e-1b81-42e6-bc64-5bb369970158-kube-api-access-f5fvp\") pod \"goldmane-666569f655-g4nz4\" (UID: \"c02cbe9e-1b81-42e6-bc64-5bb369970158\") " pod="calico-system/goldmane-666569f655-g4nz4" Dec 12 18:34:45.331753 kubelet[2688]: I1212 18:34:45.331556 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-whisker-backend-key-pair\") pod \"whisker-788f487f5c-j4ttj\" (UID: \"ce5f2337-41ff-4270-94bb-ddbc5378e9ad\") " pod="calico-system/whisker-788f487f5c-j4ttj" Dec 12 18:34:45.331753 kubelet[2688]: I1212 18:34:45.331571 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c02cbe9e-1b81-42e6-bc64-5bb369970158-goldmane-key-pair\") pod \"goldmane-666569f655-g4nz4\" (UID: \"c02cbe9e-1b81-42e6-bc64-5bb369970158\") " pod="calico-system/goldmane-666569f655-g4nz4" Dec 12 18:34:45.331753 kubelet[2688]: I1212 18:34:45.331587 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llqtg\" (UniqueName: \"kubernetes.io/projected/0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29-kube-api-access-llqtg\") pod \"calico-apiserver-6475b48c59-l57rn\" (UID: \"0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29\") " pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" Dec 12 18:34:45.331889 kubelet[2688]: I1212 18:34:45.331606 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7961b513-fe6b-4e9c-af45-39f62e7bf7e0-calico-apiserver-certs\") pod \"calico-apiserver-6475b48c59-d227n\" (UID: \"7961b513-fe6b-4e9c-af45-39f62e7bf7e0\") " pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" Dec 12 18:34:45.331889 kubelet[2688]: I1212 18:34:45.331628 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4krg\" (UniqueName: 
\"kubernetes.io/projected/7961b513-fe6b-4e9c-af45-39f62e7bf7e0-kube-api-access-k4krg\") pod \"calico-apiserver-6475b48c59-d227n\" (UID: \"7961b513-fe6b-4e9c-af45-39f62e7bf7e0\") " pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" Dec 12 18:34:45.331889 kubelet[2688]: I1212 18:34:45.331662 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b29f1a-552f-4b01-88ba-ba01aad4f2e4-config-volume\") pod \"coredns-674b8bbfcf-7c8hp\" (UID: \"f7b29f1a-552f-4b01-88ba-ba01aad4f2e4\") " pod="kube-system/coredns-674b8bbfcf-7c8hp" Dec 12 18:34:45.331889 kubelet[2688]: I1212 18:34:45.331680 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9348a57d-6ad8-4adf-8191-03a10aab4279-config-volume\") pod \"coredns-674b8bbfcf-gd47f\" (UID: \"9348a57d-6ad8-4adf-8191-03a10aab4279\") " pod="kube-system/coredns-674b8bbfcf-gd47f" Dec 12 18:34:45.331889 kubelet[2688]: I1212 18:34:45.331698 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrjb8\" (UniqueName: \"kubernetes.io/projected/f7b29f1a-552f-4b01-88ba-ba01aad4f2e4-kube-api-access-nrjb8\") pod \"coredns-674b8bbfcf-7c8hp\" (UID: \"f7b29f1a-552f-4b01-88ba-ba01aad4f2e4\") " pod="kube-system/coredns-674b8bbfcf-7c8hp" Dec 12 18:34:45.332028 kubelet[2688]: I1212 18:34:45.331714 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29-calico-apiserver-certs\") pod \"calico-apiserver-6475b48c59-l57rn\" (UID: \"0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29\") " pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" Dec 12 18:34:45.332028 kubelet[2688]: I1212 18:34:45.331746 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c02cbe9e-1b81-42e6-bc64-5bb369970158-config\") pod \"goldmane-666569f655-g4nz4\" (UID: \"c02cbe9e-1b81-42e6-bc64-5bb369970158\") " pod="calico-system/goldmane-666569f655-g4nz4" Dec 12 18:34:45.332028 kubelet[2688]: I1212 18:34:45.331762 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b4xl\" (UniqueName: \"kubernetes.io/projected/9348a57d-6ad8-4adf-8191-03a10aab4279-kube-api-access-2b4xl\") pod \"coredns-674b8bbfcf-gd47f\" (UID: \"9348a57d-6ad8-4adf-8191-03a10aab4279\") " pod="kube-system/coredns-674b8bbfcf-gd47f" Dec 12 18:34:45.516839 containerd[1494]: time="2025-12-12T18:34:45.515985495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7858cfdf57-zqtcq,Uid:e47b8144-038b-48bb-9d02-85c4035c0eac,Namespace:calico-system,Attempt:0,}" Dec 12 18:34:45.561162 containerd[1494]: time="2025-12-12T18:34:45.560532864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-788f487f5c-j4ttj,Uid:ce5f2337-41ff-4270-94bb-ddbc5378e9ad,Namespace:calico-system,Attempt:0,}" Dec 12 18:34:45.574675 kubelet[2688]: E1212 18:34:45.572441 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:45.579719 containerd[1494]: time="2025-12-12T18:34:45.577102612Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-gd47f,Uid:9348a57d-6ad8-4adf-8191-03a10aab4279,Namespace:kube-system,Attempt:0,}" Dec 12 18:34:45.583015 kubelet[2688]: E1212 18:34:45.582978 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:45.586029 containerd[1494]: time="2025-12-12T18:34:45.585971855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7c8hp,Uid:f7b29f1a-552f-4b01-88ba-ba01aad4f2e4,Namespace:kube-system,Attempt:0,}" Dec 12 18:34:45.603540 containerd[1494]: time="2025-12-12T18:34:45.602096648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6475b48c59-l57rn,Uid:0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:34:45.617272 containerd[1494]: time="2025-12-12T18:34:45.616084720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6475b48c59-d227n,Uid:7961b513-fe6b-4e9c-af45-39f62e7bf7e0,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:34:45.619658 containerd[1494]: time="2025-12-12T18:34:45.619616213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g4nz4,Uid:c02cbe9e-1b81-42e6-bc64-5bb369970158,Namespace:calico-system,Attempt:0,}" Dec 12 18:34:45.891773 containerd[1494]: time="2025-12-12T18:34:45.891710298Z" level=error msg="Failed to destroy network for sandbox \"6980f8af54afb22464a6f4c4bc3b871be171ec60be907f5fa3ca5422405a4c8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.903592 containerd[1494]: time="2025-12-12T18:34:45.903467115Z" level=error msg="Failed to destroy network for sandbox \"063dd7d17befc5089b56624f00cdbd5825e20e6c5e24fedc9c5c0fdb438dfa59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.908615 systemd[1]: Created slice kubepods-besteffort-podc6e78d63_2cda_428b_a981_9d8b48e5f776.slice - libcontainer container kubepods-besteffort-podc6e78d63_2cda_428b_a981_9d8b48e5f776.slice. 
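The wall of sandbox failures in this stretch all trace to one missing file: the Calico CNI plugin stats /var/lib/calico/nodename, which the calico/node container populates with the node's name once it is up, and that container is still initializing here, so every RunPodSandbox (and network teardown) fails with the same hint. A sketch of the check the error message describes; the exact plugin internals are assumed, not shown in this log:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodename mimics the check behind the sandbox errors here: read the file
// that the calico/node container writes once it is up, and fail with the
// hint seen in the log when it is absent.
func nodename() (string, error) {
	const path = "/var/lib/calico/nodename"
	b, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", path)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	if n, err := nodename(); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println(n)
	}
}
```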
Dec 12 18:34:45.910705 containerd[1494]: time="2025-12-12T18:34:45.910655220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7858cfdf57-zqtcq,Uid:e47b8144-038b-48bb-9d02-85c4035c0eac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"063dd7d17befc5089b56624f00cdbd5825e20e6c5e24fedc9c5c0fdb438dfa59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.928821 containerd[1494]: time="2025-12-12T18:34:45.928771518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2sqb,Uid:c6e78d63-2cda-428b-a981-9d8b48e5f776,Namespace:calico-system,Attempt:0,}" Dec 12 18:34:45.930409 containerd[1494]: time="2025-12-12T18:34:45.930372177Z" level=error msg="Failed to destroy network for sandbox \"211166b91a444a693e9f2b8caf8a5fe0e3fbb1cc65b3b959cd1b84b8a9067e8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.930543 kubelet[2688]: E1212 18:34:45.930484 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"063dd7d17befc5089b56624f00cdbd5825e20e6c5e24fedc9c5c0fdb438dfa59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.930628 kubelet[2688]: E1212 18:34:45.930600 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"063dd7d17befc5089b56624f00cdbd5825e20e6c5e24fedc9c5c0fdb438dfa59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" Dec 12 18:34:45.930661 kubelet[2688]: E1212 18:34:45.930629 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"063dd7d17befc5089b56624f00cdbd5825e20e6c5e24fedc9c5c0fdb438dfa59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" Dec 12 18:34:45.932172 containerd[1494]: time="2025-12-12T18:34:45.932051415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6475b48c59-l57rn,Uid:0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"211166b91a444a693e9f2b8caf8a5fe0e3fbb1cc65b3b959cd1b84b8a9067e8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.933302 kubelet[2688]: E1212 18:34:45.933147 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7858cfdf57-zqtcq_calico-system(e47b8144-038b-48bb-9d02-85c4035c0eac)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7858cfdf57-zqtcq_calico-system(e47b8144-038b-48bb-9d02-85c4035c0eac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"063dd7d17befc5089b56624f00cdbd5825e20e6c5e24fedc9c5c0fdb438dfa59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" podUID="e47b8144-038b-48bb-9d02-85c4035c0eac" Dec 12 18:34:45.953538 kubelet[2688]: E1212 18:34:45.953471 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"211166b91a444a693e9f2b8caf8a5fe0e3fbb1cc65b3b959cd1b84b8a9067e8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.953701 kubelet[2688]: E1212 18:34:45.953567 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"211166b91a444a693e9f2b8caf8a5fe0e3fbb1cc65b3b959cd1b84b8a9067e8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" Dec 12 18:34:45.953701 kubelet[2688]: E1212 18:34:45.953595 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"211166b91a444a693e9f2b8caf8a5fe0e3fbb1cc65b3b959cd1b84b8a9067e8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" Dec 12 18:34:45.953701 kubelet[2688]: E1212 18:34:45.953664 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6475b48c59-l57rn_calico-apiserver(0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6475b48c59-l57rn_calico-apiserver(0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"211166b91a444a693e9f2b8caf8a5fe0e3fbb1cc65b3b959cd1b84b8a9067e8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" podUID="0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29" Dec 12 18:34:45.989615 containerd[1494]: time="2025-12-12T18:34:45.989547878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-788f487f5c-j4ttj,Uid:ce5f2337-41ff-4270-94bb-ddbc5378e9ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6980f8af54afb22464a6f4c4bc3b871be171ec60be907f5fa3ca5422405a4c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.990251 kubelet[2688]: E1212 18:34:45.990170 2688 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6980f8af54afb22464a6f4c4bc3b871be171ec60be907f5fa3ca5422405a4c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.990344 kubelet[2688]: E1212 18:34:45.990282 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6980f8af54afb22464a6f4c4bc3b871be171ec60be907f5fa3ca5422405a4c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-788f487f5c-j4ttj" Dec 12 18:34:45.990344 kubelet[2688]: E1212 18:34:45.990315 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6980f8af54afb22464a6f4c4bc3b871be171ec60be907f5fa3ca5422405a4c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-788f487f5c-j4ttj" Dec 12 18:34:45.990786 kubelet[2688]: E1212 18:34:45.990460 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-788f487f5c-j4ttj_calico-system(ce5f2337-41ff-4270-94bb-ddbc5378e9ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-788f487f5c-j4ttj_calico-system(ce5f2337-41ff-4270-94bb-ddbc5378e9ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6980f8af54afb22464a6f4c4bc3b871be171ec60be907f5fa3ca5422405a4c8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-788f487f5c-j4ttj" podUID="ce5f2337-41ff-4270-94bb-ddbc5378e9ad" Dec 12 18:34:45.992809 containerd[1494]: time="2025-12-12T18:34:45.992647814Z" level=error msg="Failed to destroy network for sandbox \"448d84e5223dfc082bc4bcf80e9463913c6e6a2e617e3f3cdb23edee64be25c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.996633 containerd[1494]: time="2025-12-12T18:34:45.996563046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6475b48c59-d227n,Uid:7961b513-fe6b-4e9c-af45-39f62e7bf7e0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"448d84e5223dfc082bc4bcf80e9463913c6e6a2e617e3f3cdb23edee64be25c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:45.997408 kubelet[2688]: E1212 18:34:45.996886 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"448d84e5223dfc082bc4bcf80e9463913c6e6a2e617e3f3cdb23edee64be25c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 12 18:34:45.997408 kubelet[2688]: E1212 18:34:45.996958 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"448d84e5223dfc082bc4bcf80e9463913c6e6a2e617e3f3cdb23edee64be25c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" Dec 12 18:34:45.997408 kubelet[2688]: E1212 18:34:45.996978 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"448d84e5223dfc082bc4bcf80e9463913c6e6a2e617e3f3cdb23edee64be25c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" Dec 12 18:34:45.998320 kubelet[2688]: E1212 18:34:45.997041 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6475b48c59-d227n_calico-apiserver(7961b513-fe6b-4e9c-af45-39f62e7bf7e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6475b48c59-d227n_calico-apiserver(7961b513-fe6b-4e9c-af45-39f62e7bf7e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"448d84e5223dfc082bc4bcf80e9463913c6e6a2e617e3f3cdb23edee64be25c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" podUID="7961b513-fe6b-4e9c-af45-39f62e7bf7e0" Dec 12 18:34:46.001016 containerd[1494]: time="2025-12-12T18:34:46.000533986Z" level=error msg="Failed to destroy network for sandbox \"6f0f5e692a2478f8a5cdcaeb223f980754b3dd1befeed8b9674a3223d9f3c2db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.002864 containerd[1494]: time="2025-12-12T18:34:46.001858891Z" level=error msg="Failed to destroy network for sandbox \"86f036d5548a0a54d3f7fb875385e5cd8478549710e0f064e900df8f361adec1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.002864 containerd[1494]: time="2025-12-12T18:34:46.002822586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gd47f,Uid:9348a57d-6ad8-4adf-8191-03a10aab4279,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0f5e692a2478f8a5cdcaeb223f980754b3dd1befeed8b9674a3223d9f3c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.003332 containerd[1494]: time="2025-12-12T18:34:46.003150997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7c8hp,Uid:f7b29f1a-552f-4b01-88ba-ba01aad4f2e4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"86f036d5548a0a54d3f7fb875385e5cd8478549710e0f064e900df8f361adec1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.004364 kubelet[2688]: E1212 18:34:46.004306 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0f5e692a2478f8a5cdcaeb223f980754b3dd1befeed8b9674a3223d9f3c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.004496 kubelet[2688]: E1212 18:34:46.004394 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0f5e692a2478f8a5cdcaeb223f980754b3dd1befeed8b9674a3223d9f3c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gd47f" Dec 12 18:34:46.004496 kubelet[2688]: E1212 18:34:46.004423 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0f5e692a2478f8a5cdcaeb223f980754b3dd1befeed8b9674a3223d9f3c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gd47f" Dec 12 18:34:46.004594 kubelet[2688]: E1212 18:34:46.004487 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gd47f_kube-system(9348a57d-6ad8-4adf-8191-03a10aab4279)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gd47f_kube-system(9348a57d-6ad8-4adf-8191-03a10aab4279)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f0f5e692a2478f8a5cdcaeb223f980754b3dd1befeed8b9674a3223d9f3c2db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gd47f" podUID="9348a57d-6ad8-4adf-8191-03a10aab4279" Dec 12 18:34:46.004594 kubelet[2688]: E1212 18:34:46.004579 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f036d5548a0a54d3f7fb875385e5cd8478549710e0f064e900df8f361adec1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.004713 kubelet[2688]: E1212 18:34:46.004617 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f036d5548a0a54d3f7fb875385e5cd8478549710e0f064e900df8f361adec1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7c8hp" Dec 12 18:34:46.004713 kubelet[2688]: E1212 18:34:46.004641 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"86f036d5548a0a54d3f7fb875385e5cd8478549710e0f064e900df8f361adec1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7c8hp" Dec 12 18:34:46.005499 kubelet[2688]: E1212 18:34:46.004760 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7c8hp_kube-system(f7b29f1a-552f-4b01-88ba-ba01aad4f2e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7c8hp_kube-system(f7b29f1a-552f-4b01-88ba-ba01aad4f2e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86f036d5548a0a54d3f7fb875385e5cd8478549710e0f064e900df8f361adec1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7c8hp" podUID="f7b29f1a-552f-4b01-88ba-ba01aad4f2e4" Dec 12 18:34:46.027243 containerd[1494]: time="2025-12-12T18:34:46.027129315Z" level=error msg="Failed to destroy network for sandbox \"c5defe47a92e21044cd027408325d29aa414e9bf24e63ab2459a54d87389d300\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.028936 containerd[1494]: time="2025-12-12T18:34:46.028840225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g4nz4,Uid:c02cbe9e-1b81-42e6-bc64-5bb369970158,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5defe47a92e21044cd027408325d29aa414e9bf24e63ab2459a54d87389d300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.029605 kubelet[2688]: E1212 18:34:46.029541 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5defe47a92e21044cd027408325d29aa414e9bf24e63ab2459a54d87389d300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.031275 kubelet[2688]: E1212 18:34:46.030309 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5defe47a92e21044cd027408325d29aa414e9bf24e63ab2459a54d87389d300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g4nz4" Dec 12 18:34:46.031275 kubelet[2688]: E1212 18:34:46.030368 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5defe47a92e21044cd027408325d29aa414e9bf24e63ab2459a54d87389d300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g4nz4" Dec 12 
18:34:46.031275 kubelet[2688]: E1212 18:34:46.030432 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g4nz4_calico-system(c02cbe9e-1b81-42e6-bc64-5bb369970158)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g4nz4_calico-system(c02cbe9e-1b81-42e6-bc64-5bb369970158)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5defe47a92e21044cd027408325d29aa414e9bf24e63ab2459a54d87389d300\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g4nz4" podUID="c02cbe9e-1b81-42e6-bc64-5bb369970158" Dec 12 18:34:46.054832 containerd[1494]: time="2025-12-12T18:34:46.054754788Z" level=error msg="Failed to destroy network for sandbox \"dc49b16e9b03c222ed85a90be6575b750cf9451f11c5b70efa15667390f18a89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.056923 containerd[1494]: time="2025-12-12T18:34:46.056827208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2sqb,Uid:c6e78d63-2cda-428b-a981-9d8b48e5f776,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc49b16e9b03c222ed85a90be6575b750cf9451f11c5b70efa15667390f18a89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.057674 kubelet[2688]: E1212 18:34:46.057621 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc49b16e9b03c222ed85a90be6575b750cf9451f11c5b70efa15667390f18a89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:34:46.057954 kubelet[2688]: E1212 18:34:46.057849 2688 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc49b16e9b03c222ed85a90be6575b750cf9451f11c5b70efa15667390f18a89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g2sqb" Dec 12 18:34:46.057954 kubelet[2688]: E1212 18:34:46.057904 2688 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc49b16e9b03c222ed85a90be6575b750cf9451f11c5b70efa15667390f18a89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g2sqb" Dec 12 18:34:46.058334 kubelet[2688]: E1212 18:34:46.058158 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g2sqb_calico-system(c6e78d63-2cda-428b-a981-9d8b48e5f776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-g2sqb_calico-system(c6e78d63-2cda-428b-a981-9d8b48e5f776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc49b16e9b03c222ed85a90be6575b750cf9451f11c5b70efa15667390f18a89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:34:46.093319 kubelet[2688]: E1212 18:34:46.093279 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:46.097141 containerd[1494]: time="2025-12-12T18:34:46.097043453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:34:53.474900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2557022123.mount: Deactivated successfully. Dec 12 18:34:53.502298 containerd[1494]: time="2025-12-12T18:34:53.502193505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:53.503622 containerd[1494]: time="2025-12-12T18:34:53.503576595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:34:53.504479 containerd[1494]: time="2025-12-12T18:34:53.504437003Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:53.507263 containerd[1494]: time="2025-12-12T18:34:53.507193464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:53.507965 containerd[1494]: time="2025-12-12T18:34:53.507920715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.410297917s" Dec 12 18:34:53.507965 containerd[1494]: time="2025-12-12T18:34:53.507961258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:34:53.552693 containerd[1494]: time="2025-12-12T18:34:53.552630993Z" level=info msg="CreateContainer within sandbox \"570c637b01b4e406915eea5b25291d318f513957a7ea87fffdcb254b49e8983b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:34:53.567723 containerd[1494]: time="2025-12-12T18:34:53.566409641Z" level=info msg="Container 3656dbd537147a8b34dba42e05813bee67360ca391e7c691e34841c51918bc23: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:53.571044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647246235.mount: Deactivated successfully. 
Dec 12 18:34:53.633216 containerd[1494]: time="2025-12-12T18:34:53.633147000Z" level=info msg="CreateContainer within sandbox \"570c637b01b4e406915eea5b25291d318f513957a7ea87fffdcb254b49e8983b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3656dbd537147a8b34dba42e05813bee67360ca391e7c691e34841c51918bc23\"" Dec 12 18:34:53.635290 containerd[1494]: time="2025-12-12T18:34:53.635026451Z" level=info msg="StartContainer for \"3656dbd537147a8b34dba42e05813bee67360ca391e7c691e34841c51918bc23\"" Dec 12 18:34:53.640002 containerd[1494]: time="2025-12-12T18:34:53.639939633Z" level=info msg="connecting to shim 3656dbd537147a8b34dba42e05813bee67360ca391e7c691e34841c51918bc23" address="unix:///run/containerd/s/ca374a205cb3b7b74cbbb7b1c5b08391ff107b14bcec42ec03901a6cec5e2a21" protocol=ttrpc version=3 Dec 12 18:34:53.824436 systemd[1]: Started cri-containerd-3656dbd537147a8b34dba42e05813bee67360ca391e7c691e34841c51918bc23.scope - libcontainer container 3656dbd537147a8b34dba42e05813bee67360ca391e7c691e34841c51918bc23. Dec 12 18:34:53.923520 containerd[1494]: time="2025-12-12T18:34:53.923470434Z" level=info msg="StartContainer for \"3656dbd537147a8b34dba42e05813bee67360ca391e7c691e34841c51918bc23\" returns successfully" Dec 12 18:34:54.121420 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:34:54.122480 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 12 18:34:54.197118 kubelet[2688]: E1212 18:34:54.196216 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:54.228235 kubelet[2688]: I1212 18:34:54.226695 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l2gt7" podStartSLOduration=1.348421912 podStartE2EDuration="18.226570458s" podCreationTimestamp="2025-12-12 18:34:36 +0000 UTC" firstStartedPulling="2025-12-12 18:34:36.631043575 +0000 UTC m=+22.884324671" lastFinishedPulling="2025-12-12 18:34:53.509192102 +0000 UTC m=+39.762473217" observedRunningTime="2025-12-12 18:34:54.222466481 +0000 UTC m=+40.475747597" watchObservedRunningTime="2025-12-12 18:34:54.226570458 +0000 UTC m=+40.479851575" Dec 12 18:34:54.509329 kubelet[2688]: I1212 18:34:54.508992 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-whisker-ca-bundle\") pod \"ce5f2337-41ff-4270-94bb-ddbc5378e9ad\" (UID: \"ce5f2337-41ff-4270-94bb-ddbc5378e9ad\") " Dec 12 18:34:54.509329 kubelet[2688]: I1212 18:34:54.509264 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xshfs\" (UniqueName: \"kubernetes.io/projected/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-kube-api-access-xshfs\") pod \"ce5f2337-41ff-4270-94bb-ddbc5378e9ad\" (UID: \"ce5f2337-41ff-4270-94bb-ddbc5378e9ad\") " Dec 12 18:34:54.510386 kubelet[2688]: I1212 18:34:54.509299 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-whisker-backend-key-pair\") pod \"ce5f2337-41ff-4270-94bb-ddbc5378e9ad\" (UID: \"ce5f2337-41ff-4270-94bb-ddbc5378e9ad\") " Dec 12 18:34:54.510386 kubelet[2688]: I1212 18:34:54.510313 2688 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ce5f2337-41ff-4270-94bb-ddbc5378e9ad" (UID: "ce5f2337-41ff-4270-94bb-ddbc5378e9ad"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:34:54.521583 systemd[1]: var-lib-kubelet-pods-ce5f2337\x2d41ff\x2d4270\x2d94bb\x2dddbc5378e9ad-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:34:54.522232 kubelet[2688]: I1212 18:34:54.521772 2688 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ce5f2337-41ff-4270-94bb-ddbc5378e9ad" (UID: "ce5f2337-41ff-4270-94bb-ddbc5378e9ad"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:34:54.527487 kubelet[2688]: I1212 18:34:54.527439 2688 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-kube-api-access-xshfs" (OuterVolumeSpecName: "kube-api-access-xshfs") pod "ce5f2337-41ff-4270-94bb-ddbc5378e9ad" (UID: "ce5f2337-41ff-4270-94bb-ddbc5378e9ad"). InnerVolumeSpecName "kube-api-access-xshfs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:34:54.528976 systemd[1]: var-lib-kubelet-pods-ce5f2337\x2d41ff\x2d4270\x2d94bb\x2dddbc5378e9ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxshfs.mount: Deactivated successfully. Dec 12 18:34:54.611503 kubelet[2688]: I1212 18:34:54.611428 2688 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-whisker-ca-bundle\") on node \"ci-4459.2.2-8-48b4194eb4\" DevicePath \"\"" Dec 12 18:34:54.611503 kubelet[2688]: I1212 18:34:54.611464 2688 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xshfs\" (UniqueName: \"kubernetes.io/projected/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-kube-api-access-xshfs\") on node \"ci-4459.2.2-8-48b4194eb4\" DevicePath \"\"" Dec 12 18:34:54.611503 kubelet[2688]: I1212 18:34:54.611477 2688 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce5f2337-41ff-4270-94bb-ddbc5378e9ad-whisker-backend-key-pair\") on node \"ci-4459.2.2-8-48b4194eb4\" DevicePath \"\"" Dec 12 18:34:55.198392 kubelet[2688]: I1212 18:34:55.198042 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:34:55.201258 kubelet[2688]: E1212 18:34:55.199316 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:55.204307 systemd[1]: Removed slice kubepods-besteffort-podce5f2337_41ff_4270_94bb_ddbc5378e9ad.slice - libcontainer container kubepods-besteffort-podce5f2337_41ff_4270_94bb_ddbc5378e9ad.slice. Dec 12 18:34:55.291287 systemd[1]: Created slice kubepods-besteffort-podd054c3a5_425b_4b52_9c15_9b92c6d5d874.slice - libcontainer container kubepods-besteffort-podd054c3a5_425b_4b52_9c15_9b92c6d5d874.slice. 
Dec 12 18:34:55.315616 kubelet[2688]: I1212 18:34:55.314660 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d054c3a5-425b-4b52-9c15-9b92c6d5d874-whisker-backend-key-pair\") pod \"whisker-66976c6d85-gwdlp\" (UID: \"d054c3a5-425b-4b52-9c15-9b92c6d5d874\") " pod="calico-system/whisker-66976c6d85-gwdlp" Dec 12 18:34:55.315616 kubelet[2688]: I1212 18:34:55.314704 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d82md\" (UniqueName: \"kubernetes.io/projected/d054c3a5-425b-4b52-9c15-9b92c6d5d874-kube-api-access-d82md\") pod \"whisker-66976c6d85-gwdlp\" (UID: \"d054c3a5-425b-4b52-9c15-9b92c6d5d874\") " pod="calico-system/whisker-66976c6d85-gwdlp" Dec 12 18:34:55.315616 kubelet[2688]: I1212 18:34:55.314746 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d054c3a5-425b-4b52-9c15-9b92c6d5d874-whisker-ca-bundle\") pod \"whisker-66976c6d85-gwdlp\" (UID: \"d054c3a5-425b-4b52-9c15-9b92c6d5d874\") " pod="calico-system/whisker-66976c6d85-gwdlp" Dec 12 18:34:55.599052 containerd[1494]: time="2025-12-12T18:34:55.598595059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66976c6d85-gwdlp,Uid:d054c3a5-425b-4b52-9c15-9b92c6d5d874,Namespace:calico-system,Attempt:0,}" Dec 12 18:34:55.902259 kubelet[2688]: I1212 18:34:55.900645 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce5f2337-41ff-4270-94bb-ddbc5378e9ad" path="/var/lib/kubelet/pods/ce5f2337-41ff-4270-94bb-ddbc5378e9ad/volumes" Dec 12 18:34:56.017928 systemd-networkd[1428]: cali8eba64fd64d: Link UP Dec 12 18:34:56.018367 systemd-networkd[1428]: cali8eba64fd64d: Gained carrier Dec 12 18:34:56.048404 containerd[1494]: 2025-12-12 18:34:55.661 [INFO][3737] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:34:56.048404 containerd[1494]: 2025-12-12 18:34:55.702 [INFO][3737] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0 whisker-66976c6d85- calico-system d054c3a5-425b-4b52-9c15-9b92c6d5d874 927 0 2025-12-12 18:34:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66976c6d85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-8-48b4194eb4 whisker-66976c6d85-gwdlp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8eba64fd64d [] [] }} ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Namespace="calico-system" Pod="whisker-66976c6d85-gwdlp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-" Dec 12 18:34:56.048404 containerd[1494]: 2025-12-12 18:34:55.702 [INFO][3737] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Namespace="calico-system" Pod="whisker-66976c6d85-gwdlp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0" Dec 12 18:34:56.048404 containerd[1494]: 2025-12-12 18:34:55.902 [INFO][3783] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" 
HandleID="k8s-pod-network.c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Workload="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0" Dec 12 18:34:56.048706 containerd[1494]: 2025-12-12 18:34:55.904 [INFO][3783] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" HandleID="k8s-pod-network.c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Workload="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001023b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-8-48b4194eb4", "pod":"whisker-66976c6d85-gwdlp", "timestamp":"2025-12-12 18:34:55.902737619 +0000 UTC"}, Hostname:"ci-4459.2.2-8-48b4194eb4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:34:56.048706 containerd[1494]: 2025-12-12 18:34:55.904 [INFO][3783] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:34:56.048706 containerd[1494]: 2025-12-12 18:34:55.905 [INFO][3783] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:34:56.048706 containerd[1494]: 2025-12-12 18:34:55.906 [INFO][3783] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-8-48b4194eb4' Dec 12 18:34:56.048706 containerd[1494]: 2025-12-12 18:34:55.929 [INFO][3783] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:56.048706 containerd[1494]: 2025-12-12 18:34:55.944 [INFO][3783] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:56.048706 containerd[1494]: 2025-12-12 18:34:55.951 [INFO][3783] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:56.048706 containerd[1494]: 2025-12-12 18:34:55.955 [INFO][3783] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:56.048706 containerd[1494]: 2025-12-12 18:34:55.959 [INFO][3783] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:56.048975 containerd[1494]: 2025-12-12 18:34:55.960 [INFO][3783] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:56.048975 containerd[1494]: 2025-12-12 18:34:55.963 [INFO][3783] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114 Dec 12 18:34:56.048975 containerd[1494]: 2025-12-12 18:34:55.975 [INFO][3783] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:56.048975 containerd[1494]: 2025-12-12 18:34:55.987 [INFO][3783] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.1/26] block=192.168.2.0/26 handle="k8s-pod-network.c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:56.048975 containerd[1494]: 2025-12-12 18:34:55.987 [INFO][3783] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.1/26] handle="k8s-pod-network.c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:56.048975 containerd[1494]: 2025-12-12 18:34:55.987 [INFO][3783] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:34:56.048975 containerd[1494]: 2025-12-12 18:34:55.987 [INFO][3783] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.1/26] IPv6=[] ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" HandleID="k8s-pod-network.c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Workload="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0" Dec 12 18:34:56.049168 containerd[1494]: 2025-12-12 18:34:55.993 [INFO][3737] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Namespace="calico-system" Pod="whisker-66976c6d85-gwdlp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0", GenerateName:"whisker-66976c6d85-", Namespace:"calico-system", SelfLink:"", UID:"d054c3a5-425b-4b52-9c15-9b92c6d5d874", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66976c6d85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"", Pod:"whisker-66976c6d85-gwdlp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8eba64fd64d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:56.049168 containerd[1494]: 2025-12-12 18:34:55.994 [INFO][3737] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.1/32] ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Namespace="calico-system" Pod="whisker-66976c6d85-gwdlp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0" Dec 12 18:34:56.049863 containerd[1494]: 2025-12-12 18:34:55.994 [INFO][3737] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8eba64fd64d ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Namespace="calico-system" Pod="whisker-66976c6d85-gwdlp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0" Dec 12 18:34:56.049863 containerd[1494]: 2025-12-12 18:34:56.012 [INFO][3737] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Namespace="calico-system" Pod="whisker-66976c6d85-gwdlp" 
WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0" Dec 12 18:34:56.049921 containerd[1494]: 2025-12-12 18:34:56.015 [INFO][3737] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Namespace="calico-system" Pod="whisker-66976c6d85-gwdlp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0", GenerateName:"whisker-66976c6d85-", Namespace:"calico-system", SelfLink:"", UID:"d054c3a5-425b-4b52-9c15-9b92c6d5d874", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66976c6d85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114", Pod:"whisker-66976c6d85-gwdlp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8eba64fd64d", MAC:"ea:6e:ff:db:d6:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:56.050002 containerd[1494]: 2025-12-12 18:34:56.042 [INFO][3737] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" Namespace="calico-system" Pod="whisker-66976c6d85-gwdlp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-whisker--66976c6d85--gwdlp-eth0" Dec 12 18:34:56.113864 containerd[1494]: time="2025-12-12T18:34:56.113282138Z" level=info msg="connecting to shim c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114" address="unix:///run/containerd/s/92231347865d915bfcb682af9f363a1e2893a02d2f800782116eaf2f358a2a54" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:56.160698 systemd[1]: Started cri-containerd-c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114.scope - libcontainer container c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114. 
Dec 12 18:34:56.330264 containerd[1494]: time="2025-12-12T18:34:56.330175902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66976c6d85-gwdlp,Uid:d054c3a5-425b-4b52-9c15-9b92c6d5d874,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6405bce0a0db1f5a7ce0933e4d6b5516b8bf9e2108403cb6f6fbf7574606114\"" Dec 12 18:34:56.338392 containerd[1494]: time="2025-12-12T18:34:56.338348657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:34:56.678204 containerd[1494]: time="2025-12-12T18:34:56.678004226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:34:56.679122 containerd[1494]: time="2025-12-12T18:34:56.678936229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:34:56.679122 containerd[1494]: time="2025-12-12T18:34:56.679038502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:34:56.679360 kubelet[2688]: E1212 18:34:56.679299 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:34:56.680514 kubelet[2688]: E1212 18:34:56.679370 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:34:56.685777 kubelet[2688]: E1212 18:34:56.685613 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dbe0f4e753c849fd967bc2966515448f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d82md,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66976c6d85-gwdlp_calico-system(d054c3a5-425b-4b52-9c15-9b92c6d5d874): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:34:56.688039 containerd[1494]: time="2025-12-12T18:34:56.688004074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:34:56.780827 systemd-networkd[1428]: vxlan.calico: Link UP Dec 12 18:34:56.780836 systemd-networkd[1428]: vxlan.calico: Gained carrier Dec 12 18:34:56.899109 kubelet[2688]: E1212 18:34:56.899063 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:56.901696 containerd[1494]: time="2025-12-12T18:34:56.901641390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6475b48c59-l57rn,Uid:0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:34:56.904591 containerd[1494]: time="2025-12-12T18:34:56.904538862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7858cfdf57-zqtcq,Uid:e47b8144-038b-48bb-9d02-85c4035c0eac,Namespace:calico-system,Attempt:0,}" Dec 12 18:34:56.914060 containerd[1494]: time="2025-12-12T18:34:56.913242114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7c8hp,Uid:f7b29f1a-552f-4b01-88ba-ba01aad4f2e4,Namespace:kube-system,Attempt:0,}" Dec 12 18:34:56.931270 containerd[1494]: time="2025-12-12T18:34:56.930559688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g4nz4,Uid:c02cbe9e-1b81-42e6-bc64-5bb369970158,Namespace:calico-system,Attempt:0,}" Dec 12 18:34:57.054537 containerd[1494]: time="2025-12-12T18:34:57.054465268Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:34:57.061396 
containerd[1494]: time="2025-12-12T18:34:57.060500077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:34:57.061396 containerd[1494]: time="2025-12-12T18:34:57.060667775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:34:57.061610 kubelet[2688]: E1212 18:34:57.060881 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:34:57.061610 kubelet[2688]: E1212 18:34:57.060938 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:34:57.061718 kubelet[2688]: E1212 18:34:57.061072 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d82md,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66976c6d85-gwdlp_calico-system(d054c3a5-425b-4b52-9c15-9b92c6d5d874): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:34:57.063561 kubelet[2688]: E1212 18:34:57.063383 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66976c6d85-gwdlp" podUID="d054c3a5-425b-4b52-9c15-9b92c6d5d874" Dec 12 18:34:57.216247 kubelet[2688]: E1212 18:34:57.215737 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66976c6d85-gwdlp" podUID="d054c3a5-425b-4b52-9c15-9b92c6d5d874" Dec 12 18:34:57.351531 systemd-networkd[1428]: caliea821410166: Link UP Dec 12 18:34:57.351744 systemd-networkd[1428]: caliea821410166: Gained carrier Dec 12 18:34:57.380390 containerd[1494]: 2025-12-12 18:34:57.116 [INFO][3983] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0 calico-apiserver-6475b48c59- calico-apiserver 0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29 850 0 2025-12-12 18:34:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6475b48c59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-8-48b4194eb4 calico-apiserver-6475b48c59-l57rn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliea821410166 [] [] }} ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-l57rn" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-" Dec 12 18:34:57.380390 containerd[1494]: 2025-12-12 18:34:57.117 [INFO][3983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-l57rn" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0" Dec 12 18:34:57.380390 containerd[1494]: 2025-12-12 18:34:57.252 [INFO][4018] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" HandleID="k8s-pod-network.5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Workload="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0" Dec 12 18:34:57.380790 containerd[1494]: 2025-12-12 18:34:57.256 [INFO][4018] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" HandleID="k8s-pod-network.5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Workload="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-8-48b4194eb4", "pod":"calico-apiserver-6475b48c59-l57rn", "timestamp":"2025-12-12 18:34:57.252774046 +0000 UTC"}, Hostname:"ci-4459.2.2-8-48b4194eb4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:34:57.380790 containerd[1494]: 2025-12-12 18:34:57.256 [INFO][4018] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:34:57.380790 containerd[1494]: 2025-12-12 18:34:57.256 [INFO][4018] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:34:57.380790 containerd[1494]: 2025-12-12 18:34:57.256 [INFO][4018] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-8-48b4194eb4' Dec 12 18:34:57.380790 containerd[1494]: 2025-12-12 18:34:57.281 [INFO][4018] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.380790 containerd[1494]: 2025-12-12 18:34:57.294 [INFO][4018] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.380790 containerd[1494]: 2025-12-12 18:34:57.309 [INFO][4018] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.380790 containerd[1494]: 2025-12-12 18:34:57.312 [INFO][4018] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.380790 containerd[1494]: 2025-12-12 18:34:57.315 [INFO][4018] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.381190 containerd[1494]: 2025-12-12 18:34:57.315 [INFO][4018] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.381190 containerd[1494]: 2025-12-12 18:34:57.318 [INFO][4018] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d Dec 12 18:34:57.381190 containerd[1494]: 2025-12-12 18:34:57.325 [INFO][4018] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.381190 containerd[1494]: 2025-12-12 18:34:57.334 [INFO][4018] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.2/26] block=192.168.2.0/26 handle="k8s-pod-network.5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.381190 containerd[1494]: 2025-12-12 18:34:57.334 [INFO][4018] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.2/26] handle="k8s-pod-network.5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.381190 containerd[1494]: 2025-12-12 18:34:57.334 [INFO][4018] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
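
"Writing block in order to claim IPs" above is the allocator's optimistic-concurrency step: it claims addresses in its copy of the 192.168.2.0/26 block and writes the block back conditioned on the revision it originally read, retrying the walk on a conflict. A generic sketch of that compare-and-swap pattern (illustrative only; the real datastore client is not shown in this log):

package main

import (
	"errors"
	"fmt"
)

// A block as the allocator sees it: a revision plus claimed addresses.
type block struct {
	rev  int
	used map[string]bool
}

var store = block{used: map[string]bool{}}

// writeBlock succeeds only if nobody wrote the block since we read it.
func writeBlock(ip string, readRev int) error {
	if store.rev != readRev {
		return errors.New("update conflict: re-read block and retry")
	}
	store.used[ip] = true
	store.rev++
	return nil
}

func main() {
	rev := store.rev                            // read the block
	fmt.Println(writeBlock("192.168.2.2", rev)) // <nil>: claim lands, rev bumps
	fmt.Println(writeBlock("192.168.2.3", rev)) // conflict: revision is stale
}
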
Dec 12 18:34:57.381190 containerd[1494]: 2025-12-12 18:34:57.334 [INFO][4018] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.2/26] IPv6=[] ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" HandleID="k8s-pod-network.5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Workload="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0" Dec 12 18:34:57.382676 containerd[1494]: 2025-12-12 18:34:57.343 [INFO][3983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-l57rn" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0", GenerateName:"calico-apiserver-6475b48c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6475b48c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"", Pod:"calico-apiserver-6475b48c59-l57rn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea821410166", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:57.382800 containerd[1494]: 2025-12-12 18:34:57.344 [INFO][3983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.2/32] ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-l57rn" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0" Dec 12 18:34:57.382800 containerd[1494]: 2025-12-12 18:34:57.344 [INFO][3983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea821410166 ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-l57rn" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0" Dec 12 18:34:57.382800 containerd[1494]: 2025-12-12 18:34:57.352 [INFO][3983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-l57rn" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0" Dec 12 18:34:57.382938 containerd[1494]: 2025-12-12 18:34:57.354 [INFO][3983] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-l57rn" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0", GenerateName:"calico-apiserver-6475b48c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6475b48c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d", Pod:"calico-apiserver-6475b48c59-l57rn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea821410166", MAC:"76:b9:76:22:38:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:57.383036 containerd[1494]: 2025-12-12 18:34:57.374 [INFO][3983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-l57rn" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--l57rn-eth0" Dec 12 18:34:57.500565 containerd[1494]: time="2025-12-12T18:34:57.499062013Z" level=info msg="connecting to shim 5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d" address="unix:///run/containerd/s/75589e47e74ac61d99282ce2ad0570e13baf28f31b8dadce0c69d6528df94a1d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:57.510064 systemd-networkd[1428]: cali90c66e14b70: Link UP Dec 12 18:34:57.510779 systemd-networkd[1428]: cali90c66e14b70: Gained carrier Dec 12 18:34:57.540812 containerd[1494]: 2025-12-12 18:34:57.140 [INFO][3978] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0 goldmane-666569f655- calico-system c02cbe9e-1b81-42e6-bc64-5bb369970158 852 0 2025-12-12 18:34:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-8-48b4194eb4 goldmane-666569f655-g4nz4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali90c66e14b70 [] [] }} 
ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" Namespace="calico-system" Pod="goldmane-666569f655-g4nz4" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-" Dec 12 18:34:57.540812 containerd[1494]: 2025-12-12 18:34:57.149 [INFO][3978] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" Namespace="calico-system" Pod="goldmane-666569f655-g4nz4" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0" Dec 12 18:34:57.540812 containerd[1494]: 2025-12-12 18:34:57.273 [INFO][4030] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" HandleID="k8s-pod-network.b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" Workload="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0" Dec 12 18:34:57.541851 containerd[1494]: 2025-12-12 18:34:57.276 [INFO][4030] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" HandleID="k8s-pod-network.b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" Workload="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003324e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-8-48b4194eb4", "pod":"goldmane-666569f655-g4nz4", "timestamp":"2025-12-12 18:34:57.273390219 +0000 UTC"}, Hostname:"ci-4459.2.2-8-48b4194eb4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:34:57.541851 containerd[1494]: 2025-12-12 18:34:57.277 [INFO][4030] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:34:57.541851 containerd[1494]: 2025-12-12 18:34:57.334 [INFO][4030] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:34:57.541851 containerd[1494]: 2025-12-12 18:34:57.334 [INFO][4030] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-8-48b4194eb4' Dec 12 18:34:57.541851 containerd[1494]: 2025-12-12 18:34:57.381 [INFO][4030] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.541851 containerd[1494]: 2025-12-12 18:34:57.400 [INFO][4030] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.541851 containerd[1494]: 2025-12-12 18:34:57.412 [INFO][4030] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.541851 containerd[1494]: 2025-12-12 18:34:57.419 [INFO][4030] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.541851 containerd[1494]: 2025-12-12 18:34:57.426 [INFO][4030] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.542459 containerd[1494]: 2025-12-12 18:34:57.426 [INFO][4030] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.542459 containerd[1494]: 2025-12-12 18:34:57.435 [INFO][4030] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4 Dec 12 18:34:57.542459 containerd[1494]: 2025-12-12 18:34:57.459 [INFO][4030] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.542459 containerd[1494]: 2025-12-12 18:34:57.475 [INFO][4030] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.3/26] block=192.168.2.0/26 handle="k8s-pod-network.b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.542459 containerd[1494]: 2025-12-12 18:34:57.477 [INFO][4030] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.3/26] handle="k8s-pod-network.b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.542459 containerd[1494]: 2025-12-12 18:34:57.478 [INFO][4030] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
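[4030] claimed 192.168.2.3 from the node's affine block 192.168.2.0/26; 192.168.2.2 had already gone to the calico-apiserver pod above, and .4 through .7 follow below. A /26 spans 64 addresses, and because each pod address is installed as a /32 route rather than an on-link subnet, Calico IPAM can hand out the whole block without reserving network or broadcast addresses. A small self-contained check of that arithmetic:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affine block, taken verbatim from the log above.
	block := netip.MustParsePrefix("192.168.2.0/26")
	// 2^(32-26) = 64 allocatable addresses.
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

	// The first few, matching the assignments in this section (.2 apiserver,
	// .3 goldmane, .4 kube-controllers, .5/.6 coredns, .7 csi-node-driver):
	addr := block.Addr()
	for i := 0; i < 8; i++ {
		fmt.Println(addr)
		addr = addr.Next()
	}
}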
Dec 12 18:34:57.542459 containerd[1494]: 2025-12-12 18:34:57.479 [INFO][4030] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.3/26] IPv6=[] ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" HandleID="k8s-pod-network.b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" Workload="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0" Dec 12 18:34:57.542655 containerd[1494]: 2025-12-12 18:34:57.499 [INFO][3978] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" Namespace="calico-system" Pod="goldmane-666569f655-g4nz4" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c02cbe9e-1b81-42e6-bc64-5bb369970158", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"", Pod:"goldmane-666569f655-g4nz4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90c66e14b70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:57.542655 containerd[1494]: 2025-12-12 18:34:57.500 [INFO][3978] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.3/32] ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" Namespace="calico-system" Pod="goldmane-666569f655-g4nz4" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0" Dec 12 18:34:57.543342 containerd[1494]: 2025-12-12 18:34:57.501 [INFO][3978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90c66e14b70 ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" Namespace="calico-system" Pod="goldmane-666569f655-g4nz4" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0" Dec 12 18:34:57.543342 containerd[1494]: 2025-12-12 18:34:57.511 [INFO][3978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" Namespace="calico-system" Pod="goldmane-666569f655-g4nz4" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0" Dec 12 18:34:57.543446 containerd[1494]: 2025-12-12 18:34:57.512 [INFO][3978] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" 
Namespace="calico-system" Pod="goldmane-666569f655-g4nz4" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c02cbe9e-1b81-42e6-bc64-5bb369970158", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4", Pod:"goldmane-666569f655-g4nz4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90c66e14b70", MAC:"52:08:70:8d:b5:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:57.543535 containerd[1494]: 2025-12-12 18:34:57.535 [INFO][3978] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" Namespace="calico-system" Pod="goldmane-666569f655-g4nz4" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-goldmane--666569f655--g4nz4-eth0" Dec 12 18:34:57.567179 systemd[1]: Started cri-containerd-5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d.scope - libcontainer container 5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d. Dec 12 18:34:57.594609 containerd[1494]: time="2025-12-12T18:34:57.594553651Z" level=info msg="connecting to shim b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4" address="unix:///run/containerd/s/34d57a9576fde60b6f3c527311e0d46913f7a86b3a3239eda2e744505b9db2b0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:57.634201 systemd-networkd[1428]: cali1d3cae20035: Link UP Dec 12 18:34:57.646767 systemd-networkd[1428]: cali1d3cae20035: Gained carrier Dec 12 18:34:57.688509 systemd[1]: Started cri-containerd-b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4.scope - libcontainer container b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4. 
Dec 12 18:34:57.708470 containerd[1494]: 2025-12-12 18:34:57.127 [INFO][3965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0 calico-kube-controllers-7858cfdf57- calico-system e47b8144-038b-48bb-9d02-85c4035c0eac 843 0 2025-12-12 18:34:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7858cfdf57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-8-48b4194eb4 calico-kube-controllers-7858cfdf57-zqtcq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1d3cae20035 [] [] }} ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Namespace="calico-system" Pod="calico-kube-controllers-7858cfdf57-zqtcq" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-" Dec 12 18:34:57.708470 containerd[1494]: 2025-12-12 18:34:57.127 [INFO][3965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Namespace="calico-system" Pod="calico-kube-controllers-7858cfdf57-zqtcq" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0" Dec 12 18:34:57.708470 containerd[1494]: 2025-12-12 18:34:57.277 [INFO][4023] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" HandleID="k8s-pod-network.6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Workload="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0" Dec 12 18:34:57.710091 containerd[1494]: 2025-12-12 18:34:57.278 [INFO][4023] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" HandleID="k8s-pod-network.6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Workload="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ce4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-8-48b4194eb4", "pod":"calico-kube-controllers-7858cfdf57-zqtcq", "timestamp":"2025-12-12 18:34:57.277795294 +0000 UTC"}, Hostname:"ci-4459.2.2-8-48b4194eb4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:34:57.710091 containerd[1494]: 2025-12-12 18:34:57.278 [INFO][4023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:34:57.710091 containerd[1494]: 2025-12-12 18:34:57.477 [INFO][4023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
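The timestamps here make the lock contention concrete: [4023] logged "About to acquire" at 18:34:57.278 but "Acquired" only at 18:34:57.477, essentially the instant [4030] finished its assignment and logged its release (.478). The wait is easy to read off:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken verbatim from the [4023] handler's log lines.
	const layout = "15:04:05.000"
	requested, _ := time.Parse(layout, "18:34:57.278") // "About to acquire"
	acquired, _ := time.Parse(layout, "18:34:57.477")  // "Acquired"
	fmt.Println("host-wide IPAM lock wait:", acquired.Sub(requested)) // 199ms
}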
Dec 12 18:34:57.710091 containerd[1494]: 2025-12-12 18:34:57.478 [INFO][4023] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-8-48b4194eb4' Dec 12 18:34:57.710091 containerd[1494]: 2025-12-12 18:34:57.507 [INFO][4023] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.710091 containerd[1494]: 2025-12-12 18:34:57.528 [INFO][4023] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.710091 containerd[1494]: 2025-12-12 18:34:57.550 [INFO][4023] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.710091 containerd[1494]: 2025-12-12 18:34:57.556 [INFO][4023] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.710091 containerd[1494]: 2025-12-12 18:34:57.561 [INFO][4023] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.714007 containerd[1494]: 2025-12-12 18:34:57.561 [INFO][4023] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.714007 containerd[1494]: 2025-12-12 18:34:57.566 [INFO][4023] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018 Dec 12 18:34:57.714007 containerd[1494]: 2025-12-12 18:34:57.578 [INFO][4023] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.714007 containerd[1494]: 2025-12-12 18:34:57.597 [INFO][4023] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.4/26] block=192.168.2.0/26 handle="k8s-pod-network.6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.714007 containerd[1494]: 2025-12-12 18:34:57.598 [INFO][4023] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.4/26] handle="k8s-pod-network.6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.714007 containerd[1494]: 2025-12-12 18:34:57.598 [INFO][4023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
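"Writing block in order to claim IPs" marks the datastore write that makes a claim durable, and "Successfully claimed IPs" confirms it landed. Calico performs this as a versioned update against the allocation block, so a concurrent writer produces a conflict and forces a re-read and retry; that detail is inferred from the log's read-then-write sequence, not stated in it. The allocationBlock/writeBlock names below are illustrative only, a generic compare-and-swap sketch rather than Calico's code.

package main

import (
	"errors"
	"fmt"
)

type allocationBlock struct {
	revision int
	used     map[string]bool
}

var errConflict = errors.New("block was modified concurrently")

// writeBlock succeeds only if nobody wrote the block since it was read.
func writeBlock(b *allocationBlock, readRev int, ip string) error {
	if b.revision != readRev {
		return errConflict
	}
	b.used[ip] = true
	b.revision++
	return nil
}

func claim(b *allocationBlock, ip string) {
	for {
		rev := b.revision // "Attempting to load block"
		if writeBlock(b, rev, ip) == nil {
			return // "Successfully claimed IPs"
		}
		// conflict: another ADD won the race; re-read and try again
	}
}

func main() {
	b := &allocationBlock{used: map[string]bool{}}
	claim(b, "192.168.2.4")
	fmt.Println(b.revision, b.used)
}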
Dec 12 18:34:57.714007 containerd[1494]: 2025-12-12 18:34:57.598 [INFO][4023] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.4/26] IPv6=[] ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" HandleID="k8s-pod-network.6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Workload="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0" Dec 12 18:34:57.715438 containerd[1494]: 2025-12-12 18:34:57.620 [INFO][3965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Namespace="calico-system" Pod="calico-kube-controllers-7858cfdf57-zqtcq" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0", GenerateName:"calico-kube-controllers-7858cfdf57-", Namespace:"calico-system", SelfLink:"", UID:"e47b8144-038b-48bb-9d02-85c4035c0eac", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7858cfdf57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"", Pod:"calico-kube-controllers-7858cfdf57-zqtcq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1d3cae20035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:57.715537 containerd[1494]: 2025-12-12 18:34:57.622 [INFO][3965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.4/32] ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Namespace="calico-system" Pod="calico-kube-controllers-7858cfdf57-zqtcq" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0" Dec 12 18:34:57.715537 containerd[1494]: 2025-12-12 18:34:57.622 [INFO][3965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d3cae20035 ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Namespace="calico-system" Pod="calico-kube-controllers-7858cfdf57-zqtcq" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0" Dec 12 18:34:57.715537 containerd[1494]: 2025-12-12 18:34:57.651 [INFO][3965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Namespace="calico-system" Pod="calico-kube-controllers-7858cfdf57-zqtcq" 
WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0" Dec 12 18:34:57.715616 containerd[1494]: 2025-12-12 18:34:57.656 [INFO][3965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Namespace="calico-system" Pod="calico-kube-controllers-7858cfdf57-zqtcq" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0", GenerateName:"calico-kube-controllers-7858cfdf57-", Namespace:"calico-system", SelfLink:"", UID:"e47b8144-038b-48bb-9d02-85c4035c0eac", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7858cfdf57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018", Pod:"calico-kube-controllers-7858cfdf57-zqtcq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1d3cae20035", MAC:"1e:ab:ed:b6:e7:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:57.715672 containerd[1494]: 2025-12-12 18:34:57.682 [INFO][3965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" Namespace="calico-system" Pod="calico-kube-controllers-7858cfdf57-zqtcq" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--kube--controllers--7858cfdf57--zqtcq-eth0" Dec 12 18:34:57.777203 systemd-networkd[1428]: cali43fa171589c: Link UP Dec 12 18:34:57.780974 systemd-networkd[1428]: cali43fa171589c: Gained carrier Dec 12 18:34:57.796399 systemd-networkd[1428]: cali8eba64fd64d: Gained IPv6LL Dec 12 18:34:57.828808 containerd[1494]: 2025-12-12 18:34:57.141 [INFO][3977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0 coredns-674b8bbfcf- kube-system f7b29f1a-552f-4b01-88ba-ba01aad4f2e4 854 0 2025-12-12 18:34:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-8-48b4194eb4 coredns-674b8bbfcf-7c8hp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali43fa171589c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-7c8hp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-" Dec 12 18:34:57.828808 containerd[1494]: 2025-12-12 18:34:57.141 [INFO][3977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-7c8hp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0" Dec 12 18:34:57.828808 containerd[1494]: 2025-12-12 18:34:57.295 [INFO][4027] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" HandleID="k8s-pod-network.818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Workload="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0" Dec 12 18:34:57.829059 containerd[1494]: 2025-12-12 18:34:57.295 [INFO][4027] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" HandleID="k8s-pod-network.818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Workload="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001038b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-8-48b4194eb4", "pod":"coredns-674b8bbfcf-7c8hp", "timestamp":"2025-12-12 18:34:57.295436762 +0000 UTC"}, Hostname:"ci-4459.2.2-8-48b4194eb4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:34:57.829059 containerd[1494]: 2025-12-12 18:34:57.295 [INFO][4027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:34:57.829059 containerd[1494]: 2025-12-12 18:34:57.598 [INFO][4027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:34:57.829059 containerd[1494]: 2025-12-12 18:34:57.599 [INFO][4027] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-8-48b4194eb4' Dec 12 18:34:57.829059 containerd[1494]: 2025-12-12 18:34:57.644 [INFO][4027] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.829059 containerd[1494]: 2025-12-12 18:34:57.662 [INFO][4027] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.829059 containerd[1494]: 2025-12-12 18:34:57.681 [INFO][4027] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.829059 containerd[1494]: 2025-12-12 18:34:57.699 [INFO][4027] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.829059 containerd[1494]: 2025-12-12 18:34:57.718 [INFO][4027] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.829327 containerd[1494]: 2025-12-12 18:34:57.718 [INFO][4027] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.829327 containerd[1494]: 2025-12-12 18:34:57.723 [INFO][4027] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17 Dec 12 18:34:57.829327 containerd[1494]: 2025-12-12 18:34:57.741 [INFO][4027] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.829327 containerd[1494]: 2025-12-12 18:34:57.756 [INFO][4027] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.5/26] block=192.168.2.0/26 handle="k8s-pod-network.818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.829327 containerd[1494]: 2025-12-12 18:34:57.756 [INFO][4027] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.5/26] handle="k8s-pod-network.818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:57.829327 containerd[1494]: 2025-12-12 18:34:57.756 [INFO][4027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:34:57.829327 containerd[1494]: 2025-12-12 18:34:57.760 [INFO][4027] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.5/26] IPv6=[] ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" HandleID="k8s-pod-network.818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Workload="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0" Dec 12 18:34:57.829516 containerd[1494]: 2025-12-12 18:34:57.772 [INFO][3977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-7c8hp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f7b29f1a-552f-4b01-88ba-ba01aad4f2e4", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"", Pod:"coredns-674b8bbfcf-7c8hp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43fa171589c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:57.829516 containerd[1494]: 2025-12-12 18:34:57.773 [INFO][3977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.5/32] ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-7c8hp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0" Dec 12 18:34:57.829516 containerd[1494]: 2025-12-12 18:34:57.773 [INFO][3977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43fa171589c ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-7c8hp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0" Dec 12 18:34:57.829516 containerd[1494]: 2025-12-12 18:34:57.779 [INFO][3977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-7c8hp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0" Dec 12 18:34:57.829516 containerd[1494]: 2025-12-12 18:34:57.782 [INFO][3977] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-7c8hp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f7b29f1a-552f-4b01-88ba-ba01aad4f2e4", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17", Pod:"coredns-674b8bbfcf-7c8hp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43fa171589c", MAC:"a6:e6:0f:65:44:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:57.829516 containerd[1494]: 2025-12-12 18:34:57.816 [INFO][3977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-7c8hp" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--7c8hp-eth0" Dec 12 18:34:57.900175 kubelet[2688]: E1212 18:34:57.899674 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:57.905164 containerd[1494]: time="2025-12-12T18:34:57.904166672Z" level=info msg="connecting to shim 6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018" address="unix:///run/containerd/s/0cbe8d8d8f954aa3c8f3adfe1cc6008f16f74cca98d8d59d11e57c1bd898b0b6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:57.916378 containerd[1494]: time="2025-12-12T18:34:57.916208472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gd47f,Uid:9348a57d-6ad8-4adf-8191-03a10aab4279,Namespace:kube-system,Attempt:0,}" Dec 12 
18:34:57.928260 containerd[1494]: time="2025-12-12T18:34:57.927563415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2sqb,Uid:c6e78d63-2cda-428b-a981-9d8b48e5f776,Namespace:calico-system,Attempt:0,}" Dec 12 18:34:57.931260 containerd[1494]: time="2025-12-12T18:34:57.929439132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6475b48c59-d227n,Uid:7961b513-fe6b-4e9c-af45-39f62e7bf7e0,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:34:57.996130 containerd[1494]: time="2025-12-12T18:34:57.994754645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6475b48c59-l57rn,Uid:0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5750b2b53d5891c250177caf160ed06b8cc11d3f8ce4dbeb931c56e2df7f0b9d\"" Dec 12 18:34:58.018774 containerd[1494]: time="2025-12-12T18:34:58.018304788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:34:58.043273 systemd[1]: Started cri-containerd-6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018.scope - libcontainer container 6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018. Dec 12 18:34:58.112758 containerd[1494]: time="2025-12-12T18:34:58.112399053Z" level=info msg="connecting to shim 818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17" address="unix:///run/containerd/s/47feadd2210ac60cbccbc739f7e75ddcce90c67db3be756a4fdda89533ae8f59" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:58.145892 containerd[1494]: time="2025-12-12T18:34:58.145829793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g4nz4,Uid:c02cbe9e-1b81-42e6-bc64-5bb369970158,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1be1f498fc5e6bfd5b2f3f1bfd67c68095c6701ded31b61d4a58db00ed1b2b4\"" Dec 12 18:34:58.224473 systemd[1]: Started cri-containerd-818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17.scope - libcontainer container 818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17. 
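Two side notes on the lines above. The kubelet dns.go:153 error means the host's resolv.conf offers more nameservers than the limit of three that kubelet will write into a pod's resolv.conf, so the extras are dropped; the retained line even carries a duplicate (67.207.67.2 appears twice), which suggests the host's resolv.conf repeats that server. Separately, the coredns endpoints are the only ones in this section with WorkloadEndpointPort entries, and the Go struct dump prints their ports in hex: Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics. Those are just the usual decimal values:

package main

import "fmt"

func main() {
	// Port values as dumped in the WorkloadEndpointPort structs above.
	fmt.Println(0x35)   // 53: dns (UDP) and dns-tcp (TCP)
	fmt.Println(0x23c1) // 9153: the CoreDNS Prometheus metrics port
}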
Dec 12 18:34:58.259254 containerd[1494]: time="2025-12-12T18:34:58.258116679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7858cfdf57-zqtcq,Uid:e47b8144-038b-48bb-9d02-85c4035c0eac,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d93c9bf69f135e7f7746f77a2c7e7eebe959c565b38273c3c1454da7086e018\"" Dec 12 18:34:58.263163 kubelet[2688]: E1212 18:34:58.263113 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66976c6d85-gwdlp" podUID="d054c3a5-425b-4b52-9c15-9b92c6d5d874" Dec 12 18:34:58.399439 containerd[1494]: time="2025-12-12T18:34:58.399376790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7c8hp,Uid:f7b29f1a-552f-4b01-88ba-ba01aad4f2e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17\"" Dec 12 18:34:58.403214 kubelet[2688]: E1212 18:34:58.403162 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:58.409253 containerd[1494]: time="2025-12-12T18:34:58.408923145Z" level=info msg="CreateContainer within sandbox \"818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:34:58.442162 containerd[1494]: time="2025-12-12T18:34:58.442102687Z" level=info msg="Container 3f735ed1b0cceb5d848e9077278063ff4a21b4257a7ad91a6e2483ad799d4847: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:58.466170 containerd[1494]: time="2025-12-12T18:34:58.466062344Z" level=info msg="CreateContainer within sandbox \"818c367b559d0ac19187fd0bfb2d8a4922c6baf072db326c8d18b5dad4843b17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f735ed1b0cceb5d848e9077278063ff4a21b4257a7ad91a6e2483ad799d4847\"" Dec 12 18:34:58.467200 containerd[1494]: time="2025-12-12T18:34:58.467174802Z" level=info msg="StartContainer for \"3f735ed1b0cceb5d848e9077278063ff4a21b4257a7ad91a6e2483ad799d4847\"" Dec 12 18:34:58.476487 containerd[1494]: time="2025-12-12T18:34:58.476293170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:34:58.476845 containerd[1494]: time="2025-12-12T18:34:58.476816493Z" level=info msg="connecting to shim 3f735ed1b0cceb5d848e9077278063ff4a21b4257a7ad91a6e2483ad799d4847" address="unix:///run/containerd/s/47feadd2210ac60cbccbc739f7e75ddcce90c67db3be756a4fdda89533ae8f59" protocol=ttrpc version=3 Dec 12 18:34:58.477317 containerd[1494]: time="2025-12-12T18:34:58.477283946Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:34:58.478432 containerd[1494]: time="2025-12-12T18:34:58.478254117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:34:58.478505 kubelet[2688]: E1212 18:34:58.478457 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:34:58.478560 kubelet[2688]: E1212 18:34:58.478506 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:34:58.478912 kubelet[2688]: E1212 18:34:58.478745 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llqtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6475b48c59-l57rn_calico-apiserver(0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:34:58.480272 kubelet[2688]: E1212 18:34:58.480109 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" podUID="0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29" Dec 12 18:34:58.481485 containerd[1494]: time="2025-12-12T18:34:58.481410290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:34:58.511278 systemd-networkd[1428]: cali7abf89f0327: Link UP Dec 12 18:34:58.522498 systemd-networkd[1428]: cali7abf89f0327: Gained carrier Dec 12 18:34:58.568509 systemd[1]: Started cri-containerd-3f735ed1b0cceb5d848e9077278063ff4a21b4257a7ad91a6e2483ad799d4847.scope - libcontainer container 3f735ed1b0cceb5d848e9077278063ff4a21b4257a7ad91a6e2483ad799d4847. Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.177 [INFO][4218] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0 coredns-674b8bbfcf- kube-system 9348a57d-6ad8-4adf-8191-03a10aab4279 853 0 2025-12-12 18:34:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-8-48b4194eb4 coredns-674b8bbfcf-gd47f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7abf89f0327 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gd47f" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.178 [INFO][4218] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gd47f" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.317 [INFO][4319] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" HandleID="k8s-pod-network.bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Workload="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.318 [INFO][4319] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" HandleID="k8s-pod-network.bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Workload="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325a50), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4459.2.2-8-48b4194eb4", "pod":"coredns-674b8bbfcf-gd47f", "timestamp":"2025-12-12 18:34:58.317615617 +0000 UTC"}, Hostname:"ci-4459.2.2-8-48b4194eb4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.318 [INFO][4319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.319 [INFO][4319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.319 [INFO][4319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-8-48b4194eb4' Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.353 [INFO][4319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.382 [INFO][4319] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.417 [INFO][4319] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.425 [INFO][4319] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.429 [INFO][4319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.429 [INFO][4319] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.450 [INFO][4319] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3 Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.473 [INFO][4319] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.490 [INFO][4319] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.6/26] block=192.168.2.0/26 handle="k8s-pod-network.bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.491 [INFO][4319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.6/26] handle="k8s-pod-network.bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.491 [INFO][4319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:34:58.584786 containerd[1494]: 2025-12-12 18:34:58.492 [INFO][4319] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.6/26] IPv6=[] ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" HandleID="k8s-pod-network.bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Workload="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0" Dec 12 18:34:58.587398 containerd[1494]: 2025-12-12 18:34:58.500 [INFO][4218] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gd47f" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9348a57d-6ad8-4adf-8191-03a10aab4279", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"", Pod:"coredns-674b8bbfcf-gd47f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7abf89f0327", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:58.587398 containerd[1494]: 2025-12-12 18:34:58.504 [INFO][4218] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.6/32] ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gd47f" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0" Dec 12 18:34:58.587398 containerd[1494]: 2025-12-12 18:34:58.504 [INFO][4218] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7abf89f0327 ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gd47f" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0" Dec 12 18:34:58.587398 containerd[1494]: 2025-12-12 18:34:58.520 [INFO][4218] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-gd47f" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0" Dec 12 18:34:58.587398 containerd[1494]: 2025-12-12 18:34:58.535 [INFO][4218] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gd47f" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9348a57d-6ad8-4adf-8191-03a10aab4279", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3", Pod:"coredns-674b8bbfcf-gd47f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7abf89f0327", MAC:"42:2a:10:a4:56:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:58.587398 containerd[1494]: 2025-12-12 18:34:58.576 [INFO][4218] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gd47f" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-coredns--674b8bbfcf--gd47f-eth0" Dec 12 18:34:58.622320 containerd[1494]: time="2025-12-12T18:34:58.621191353Z" level=info msg="connecting to shim bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3" address="unix:///run/containerd/s/d9879e578a51074d9214aa7844919c1e9e429d342fab464ef1a4268610656213" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:58.628443 systemd-networkd[1428]: vxlan.calico: Gained IPv6LL Dec 12 18:34:58.711039 systemd-networkd[1428]: calibde62e6068d: Link UP Dec 12 18:34:58.713402 systemd-networkd[1428]: calibde62e6068d: Gained carrier Dec 12 18:34:58.719492 systemd[1]: Started cri-containerd-bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3.scope - libcontainer container bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3. 
Dec 12 18:34:58.737972 containerd[1494]: time="2025-12-12T18:34:58.737916977Z" level=info msg="StartContainer for \"3f735ed1b0cceb5d848e9077278063ff4a21b4257a7ad91a6e2483ad799d4847\" returns successfully" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.235 [INFO][4238] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0 csi-node-driver- calico-system c6e78d63-2cda-428b-a981-9d8b48e5f776 731 0 2025-12-12 18:34:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-8-48b4194eb4 csi-node-driver-g2sqb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibde62e6068d [] [] }} ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Namespace="calico-system" Pod="csi-node-driver-g2sqb" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.235 [INFO][4238] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Namespace="calico-system" Pod="csi-node-driver-g2sqb" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.353 [INFO][4334] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" HandleID="k8s-pod-network.8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Workload="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.354 [INFO][4334] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" HandleID="k8s-pod-network.8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Workload="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-8-48b4194eb4", "pod":"csi-node-driver-g2sqb", "timestamp":"2025-12-12 18:34:58.353157389 +0000 UTC"}, Hostname:"ci-4459.2.2-8-48b4194eb4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.354 [INFO][4334] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.492 [INFO][4334] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.492 [INFO][4334] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-8-48b4194eb4' Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.546 [INFO][4334] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.566 [INFO][4334] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.580 [INFO][4334] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.588 [INFO][4334] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.593 [INFO][4334] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.593 [INFO][4334] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.598 [INFO][4334] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7 Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.617 [INFO][4334] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.642 [INFO][4334] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.7/26] block=192.168.2.0/26 handle="k8s-pod-network.8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.643 [INFO][4334] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.7/26] handle="k8s-pod-network.8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.644 [INFO][4334] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
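The [4334] IPAM entries above trace Calico's address-assignment path end to end: acquire the host-wide IPAM lock, look up the host's block affinities, confirm affinity for 192.168.2.0/26, load that block, claim the next free address, write the block back to the datastore, and release the lock. Below is a minimal sketch of the claim step in Go, assuming a simple in-memory allocation set; it illustrates the logic visible in the log, not Calico's actual implementation (which also manages handles, attributes, and datastore compare-and-swap writes):

package main

import (
	"fmt"
	"net"
)

// assignFromBlock models the step logged as ipam.go "Attempting to assign 1
// addresses from block": scan a /26 the host has affinity for and claim the
// first address not already allocated.
func assignFromBlock(block *net.IPNet, allocated map[string]bool) (net.IP, error) {
	ip := block.IP.Mask(block.Mask)
	for ; block.Contains(ip); ip = next(ip) {
		if !allocated[ip.String()] {
			allocated[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", block)
}

// next returns ip+1, carrying across octets.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.2.0/26") // the affine block from the log
	// Pretend .0-.6 are already taken (earlier pods in this same log window);
	// the next claim then lands on 192.168.2.7.
	allocated := map[string]bool{}
	for i := 0; i <= 6; i++ {
		allocated[fmt.Sprintf("192.168.2.%d", i)] = true
	}
	ip, _ := assignFromBlock(block, allocated)
	fmt.Println("claimed:", ip) // claimed: 192.168.2.7
}

Run against a block where 192.168.2.0 through 192.168.2.6 are taken, this claims 192.168.2.7, matching the address the log assigns to csi-node-driver-g2sqb.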
Dec 12 18:34:58.751435 containerd[1494]: 2025-12-12 18:34:58.644 [INFO][4334] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.7/26] IPv6=[] ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" HandleID="k8s-pod-network.8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Workload="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0" Dec 12 18:34:58.752444 containerd[1494]: 2025-12-12 18:34:58.657 [INFO][4238] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Namespace="calico-system" Pod="csi-node-driver-g2sqb" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6e78d63-2cda-428b-a981-9d8b48e5f776", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"", Pod:"csi-node-driver-g2sqb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibde62e6068d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:58.752444 containerd[1494]: 2025-12-12 18:34:58.659 [INFO][4238] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.7/32] ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Namespace="calico-system" Pod="csi-node-driver-g2sqb" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0" Dec 12 18:34:58.752444 containerd[1494]: 2025-12-12 18:34:58.660 [INFO][4238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibde62e6068d ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Namespace="calico-system" Pod="csi-node-driver-g2sqb" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0" Dec 12 18:34:58.752444 containerd[1494]: 2025-12-12 18:34:58.707 [INFO][4238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Namespace="calico-system" Pod="csi-node-driver-g2sqb" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0" Dec 12 18:34:58.752444 containerd[1494]: 2025-12-12 18:34:58.716 [INFO][4238] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Namespace="calico-system" Pod="csi-node-driver-g2sqb" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6e78d63-2cda-428b-a981-9d8b48e5f776", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7", Pod:"csi-node-driver-g2sqb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibde62e6068d", MAC:"d2:f0:0f:ea:74:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:58.752444 containerd[1494]: 2025-12-12 18:34:58.745 [INFO][4238] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" Namespace="calico-system" Pod="csi-node-driver-g2sqb" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-csi--node--driver--g2sqb-eth0" Dec 12 18:34:58.800926 containerd[1494]: time="2025-12-12T18:34:58.800836141Z" level=info msg="connecting to shim 8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7" address="unix:///run/containerd/s/82043ca135b0a5023fa30b7c1187fb78bcd606d74f6d36915c8c5d90f007da98" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:58.821802 systemd-networkd[1428]: cali90c66e14b70: Gained IPv6LL Dec 12 18:34:58.829831 systemd-networkd[1428]: cali3f93ff6284e: Link UP Dec 12 18:34:58.839453 systemd-networkd[1428]: cali3f93ff6284e: Gained carrier Dec 12 18:34:58.842041 containerd[1494]: time="2025-12-12T18:34:58.841998946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:34:58.843095 containerd[1494]: time="2025-12-12T18:34:58.842956299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:34:58.844176 containerd[1494]: time="2025-12-12T18:34:58.843032979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:34:58.844499 kubelet[2688]: E1212 18:34:58.844465 2688 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:34:58.844795 kubelet[2688]: E1212 18:34:58.844613 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:34:58.846706 containerd[1494]: time="2025-12-12T18:34:58.845668335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:34:58.846788 kubelet[2688]: E1212 18:34:58.845669 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5fvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g4nz4_calico-system(c02cbe9e-1b81-42e6-bc64-5bb369970158): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:34:58.847392 kubelet[2688]: E1212 18:34:58.847336 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g4nz4" podUID="c02cbe9e-1b81-42e6-bc64-5bb369970158" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.251 [INFO][4235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0 calico-apiserver-6475b48c59- calico-apiserver 7961b513-fe6b-4e9c-af45-39f62e7bf7e0 855 0 2025-12-12 18:34:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6475b48c59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-8-48b4194eb4 calico-apiserver-6475b48c59-d227n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3f93ff6284e [] [] }} ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-d227n" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.251 [INFO][4235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-d227n" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.392 [INFO][4339] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" HandleID="k8s-pod-network.9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Workload="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0" 
Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.393 [INFO][4339] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" HandleID="k8s-pod-network.9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Workload="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000410210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-8-48b4194eb4", "pod":"calico-apiserver-6475b48c59-d227n", "timestamp":"2025-12-12 18:34:58.392147911 +0000 UTC"}, Hostname:"ci-4459.2.2-8-48b4194eb4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.393 [INFO][4339] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.644 [INFO][4339] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.647 [INFO][4339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-8-48b4194eb4' Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.684 [INFO][4339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.722 [INFO][4339] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.739 [INFO][4339] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.747 [INFO][4339] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.755 [INFO][4339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.755 [INFO][4339] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.759 [INFO][4339] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.776 [INFO][4339] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.793 [INFO][4339] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.8/26] block=192.168.2.0/26 handle="k8s-pod-network.9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.794 [INFO][4339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.8/26] 
handle="k8s-pod-network.9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" host="ci-4459.2.2-8-48b4194eb4" Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.794 [INFO][4339] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:34:58.880695 containerd[1494]: 2025-12-12 18:34:58.794 [INFO][4339] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.8/26] IPv6=[] ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" HandleID="k8s-pod-network.9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Workload="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0" Dec 12 18:34:58.882554 containerd[1494]: 2025-12-12 18:34:58.811 [INFO][4235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-d227n" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0", GenerateName:"calico-apiserver-6475b48c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"7961b513-fe6b-4e9c-af45-39f62e7bf7e0", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6475b48c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"", Pod:"calico-apiserver-6475b48c59-d227n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f93ff6284e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:58.882554 containerd[1494]: 2025-12-12 18:34:58.812 [INFO][4235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.8/32] ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-d227n" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0" Dec 12 18:34:58.882554 containerd[1494]: 2025-12-12 18:34:58.812 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f93ff6284e ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-d227n" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0" Dec 12 18:34:58.882554 containerd[1494]: 2025-12-12 18:34:58.843 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-d227n" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0" Dec 12 18:34:58.882554 containerd[1494]: 2025-12-12 18:34:58.847 [INFO][4235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-d227n" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0", GenerateName:"calico-apiserver-6475b48c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"7961b513-fe6b-4e9c-af45-39f62e7bf7e0", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6475b48c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-8-48b4194eb4", ContainerID:"9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d", Pod:"calico-apiserver-6475b48c59-d227n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f93ff6284e", MAC:"a2:92:90:64:3a:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:34:58.882554 containerd[1494]: 2025-12-12 18:34:58.872 [INFO][4235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" Namespace="calico-apiserver" Pod="calico-apiserver-6475b48c59-d227n" WorkloadEndpoint="ci--4459.2.2--8--48b4194eb4-k8s-calico--apiserver--6475b48c59--d227n-eth0" Dec 12 18:34:58.922577 systemd[1]: Started cri-containerd-8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7.scope - libcontainer container 8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7. 
Dec 12 18:34:58.950352 systemd-networkd[1428]: cali1d3cae20035: Gained IPv6LL Dec 12 18:34:58.974774 containerd[1494]: time="2025-12-12T18:34:58.974604101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gd47f,Uid:9348a57d-6ad8-4adf-8191-03a10aab4279,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3\"" Dec 12 18:34:58.983081 kubelet[2688]: E1212 18:34:58.982825 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:58.996972 containerd[1494]: time="2025-12-12T18:34:58.996621436Z" level=info msg="connecting to shim 9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d" address="unix:///run/containerd/s/47e0fdfc4ff5d7a825a28221eb11bedf11e25834863042fd1c10e382f36f78f0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:58.998468 containerd[1494]: time="2025-12-12T18:34:58.998369810Z" level=info msg="CreateContainer within sandbox \"bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:34:59.012482 systemd-networkd[1428]: caliea821410166: Gained IPv6LL Dec 12 18:34:59.038309 containerd[1494]: time="2025-12-12T18:34:59.036811152Z" level=info msg="Container 837b7359e6916d365c11e06660ff4a8a201fbd7ebdb8492f5e6d1d995bb54538: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:59.059517 containerd[1494]: time="2025-12-12T18:34:59.058811408Z" level=info msg="CreateContainer within sandbox \"bd5d483de06353ee77bf9f739f0f835b6773e24e233b5c03f89e4bffdd015be3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"837b7359e6916d365c11e06660ff4a8a201fbd7ebdb8492f5e6d1d995bb54538\"" Dec 12 18:34:59.064376 containerd[1494]: time="2025-12-12T18:34:59.064305412Z" level=info msg="StartContainer for \"837b7359e6916d365c11e06660ff4a8a201fbd7ebdb8492f5e6d1d995bb54538\"" Dec 12 18:34:59.072279 containerd[1494]: time="2025-12-12T18:34:59.071471075Z" level=info msg="connecting to shim 837b7359e6916d365c11e06660ff4a8a201fbd7ebdb8492f5e6d1d995bb54538" address="unix:///run/containerd/s/d9879e578a51074d9214aa7844919c1e9e429d342fab464ef1a4268610656213" protocol=ttrpc version=3 Dec 12 18:34:59.092588 systemd[1]: Started cri-containerd-9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d.scope - libcontainer container 9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d. Dec 12 18:34:59.121538 systemd[1]: Started cri-containerd-837b7359e6916d365c11e06660ff4a8a201fbd7ebdb8492f5e6d1d995bb54538.scope - libcontainer container 837b7359e6916d365c11e06660ff4a8a201fbd7ebdb8492f5e6d1d995bb54538. 
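The ErrImagePull entries scattered through this window (goldmane above; kube-controllers, csi, apiserver, node-driver-registrar, and whisker below) all fail identically: ghcr.io answers 404 for the flatcar/calico v3.30.4 tags, so containerd cannot resolve the references and kubelet backs off. A minimal way to reproduce the resolve step from the node with the containerd v1 Go client; the socket path and the k8s.io namespace are taken from the shim addresses in the log, and the snippet is a debugging aid, not part of the recorded session:

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same socket containerd[1494] serves in the log; requires root on the node.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubelet-managed images live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the references the log shows failing with "not found".
	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4"
	if _, err := client.Pull(ctx, ref); err != nil {
		// Expected here: a NotFound resolve error mirroring the kubelet's
		// "failed to resolve reference" messages above.
		fmt.Println("pull failed:", err)
		return
	}
	fmt.Println("pulled", ref)
}

The equivalent host one-liners are crictl pull ghcr.io/flatcar/calico/goldmane:v3.30.4 or ctr -n k8s.io images pull with the same reference; both should report the same "not found" resolve failure for as long as the registry returns 404.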
Dec 12 18:34:59.140568 systemd-networkd[1428]: cali43fa171589c: Gained IPv6LL Dec 12 18:34:59.214286 containerd[1494]: time="2025-12-12T18:34:59.213945487Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:34:59.216882 containerd[1494]: time="2025-12-12T18:34:59.216759359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:34:59.218042 containerd[1494]: time="2025-12-12T18:34:59.217934080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:34:59.221261 kubelet[2688]: E1212 18:34:59.219291 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:34:59.221261 kubelet[2688]: E1212 18:34:59.219343 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:34:59.221261 kubelet[2688]: E1212 18:34:59.219499 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2zmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7858cfdf57-zqtcq_calico-system(e47b8144-038b-48bb-9d02-85c4035c0eac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:34:59.221261 kubelet[2688]: E1212 18:34:59.221075 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" podUID="e47b8144-038b-48bb-9d02-85c4035c0eac" Dec 12 18:34:59.228211 containerd[1494]: time="2025-12-12T18:34:59.227958206Z" level=info msg="StartContainer for \"837b7359e6916d365c11e06660ff4a8a201fbd7ebdb8492f5e6d1d995bb54538\" returns successfully" Dec 12 18:34:59.238697 containerd[1494]: time="2025-12-12T18:34:59.238438423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2sqb,Uid:c6e78d63-2cda-428b-a981-9d8b48e5f776,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f50c78b61cd11cbdfed96fa9332716ce169d24b03d389c54c7b59144cbcb3e7\"" Dec 12 18:34:59.243169 containerd[1494]: time="2025-12-12T18:34:59.241839707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:34:59.257986 containerd[1494]: time="2025-12-12T18:34:59.257787830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6475b48c59-d227n,Uid:7961b513-fe6b-4e9c-af45-39f62e7bf7e0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9781124ca06399958db1c7853b203b18d9a1eb51c5e6b1116406e96fc8fa4b2d\"" Dec 12 18:34:59.269197 kubelet[2688]: E1212 18:34:59.268810 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:59.272156 kubelet[2688]: E1212 18:34:59.272096 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:34:59.287920 kubelet[2688]: E1212 18:34:59.287872 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g4nz4" podUID="c02cbe9e-1b81-42e6-bc64-5bb369970158" Dec 12 18:34:59.289518 kubelet[2688]: E1212 18:34:59.288206 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" podUID="e47b8144-038b-48bb-9d02-85c4035c0eac" Dec 12 18:34:59.289518 kubelet[2688]: E1212 18:34:59.289177 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" podUID="0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29" Dec 12 18:34:59.318783 kubelet[2688]: I1212 18:34:59.318704 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7c8hp" podStartSLOduration=39.318678295 podStartE2EDuration="39.318678295s" podCreationTimestamp="2025-12-12 18:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:34:59.30060055 +0000 UTC m=+45.553881667" watchObservedRunningTime="2025-12-12 18:34:59.318678295 +0000 UTC m=+45.571959413" Dec 12 18:34:59.338202 kubelet[2688]: I1212 18:34:59.338130 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gd47f" podStartSLOduration=39.338113621 podStartE2EDuration="39.338113621s" podCreationTimestamp="2025-12-12 18:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:34:59.320559777 +0000 UTC m=+45.573840893" watchObservedRunningTime="2025-12-12 18:34:59.338113621 +0000 UTC m=+45.591394739" Dec 12 18:34:59.589417 systemd-networkd[1428]: cali7abf89f0327: Gained IPv6LL Dec 12 18:34:59.600188 containerd[1494]: time="2025-12-12T18:34:59.599948771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:34:59.600806 containerd[1494]: time="2025-12-12T18:34:59.600680766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:34:59.600806 containerd[1494]: time="2025-12-12T18:34:59.600775316Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:34:59.601343 kubelet[2688]: E1212 18:34:59.601006 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:34:59.601343 kubelet[2688]: E1212 18:34:59.601062 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:34:59.601795 kubelet[2688]: E1212 18:34:59.601678 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlvsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g2sqb_calico-system(c6e78d63-2cda-428b-a981-9d8b48e5f776): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:34:59.602162 containerd[1494]: time="2025-12-12T18:34:59.602123627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:34:59.977548 containerd[1494]: time="2025-12-12T18:34:59.977437107Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:34:59.978340 containerd[1494]: time="2025-12-12T18:34:59.978216203Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:34:59.978809 containerd[1494]: time="2025-12-12T18:34:59.978275521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:34:59.978886 kubelet[2688]: E1212 18:34:59.978586 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:34:59.978886 kubelet[2688]: E1212 18:34:59.978637 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:34:59.979313 containerd[1494]: time="2025-12-12T18:34:59.979286305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:34:59.979824 kubelet[2688]: E1212 18:34:59.979693 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4krg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6475b48c59-d227n_calico-apiserver(7961b513-fe6b-4e9c-af45-39f62e7bf7e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:34:59.981819 kubelet[2688]: E1212 18:34:59.981740 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" podUID="7961b513-fe6b-4e9c-af45-39f62e7bf7e0" Dec 12 18:35:00.100812 systemd-networkd[1428]: calibde62e6068d: Gained IPv6LL Dec 12 18:35:00.289407 kubelet[2688]: E1212 18:35:00.288762 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:00.291576 kubelet[2688]: E1212 18:35:00.291010 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:00.292864 kubelet[2688]: E1212 18:35:00.292191 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" podUID="7961b513-fe6b-4e9c-af45-39f62e7bf7e0" Dec 12 18:35:00.293557 kubelet[2688]: E1212 18:35:00.293416 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" podUID="e47b8144-038b-48bb-9d02-85c4035c0eac" Dec 12 18:35:00.352115 containerd[1494]: time="2025-12-12T18:35:00.352054255Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:00.354859 containerd[1494]: time="2025-12-12T18:35:00.353338207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:35:00.354859 containerd[1494]: time="2025-12-12T18:35:00.353395890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:35:00.355395 kubelet[2688]: E1212 18:35:00.355335 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:35:00.355520 kubelet[2688]: E1212 18:35:00.355405 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:35:00.355636 kubelet[2688]: E1212 18:35:00.355579 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlvsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g2sqb_calico-system(c6e78d63-2cda-428b-a981-9d8b48e5f776): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:00.356418 systemd-networkd[1428]: cali3f93ff6284e: Gained IPv6LL Dec 12 18:35:00.358078 kubelet[2688]: E1212 18:35:00.356800 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:35:01.296678 kubelet[2688]: E1212 18:35:01.296619 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:01.299245 kubelet[2688]: E1212 18:35:01.298991 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Dec 12 18:35:01.301526 kubelet[2688]: E1212 18:35:01.301320 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:35:02.301973 kubelet[2688]: E1212 18:35:02.301912 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:02.302830 kubelet[2688]: E1212 18:35:02.302560 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:06.377356 kubelet[2688]: I1212 18:35:06.376610 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:35:06.377356 kubelet[2688]: E1212 18:35:06.377120 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:07.320918 kubelet[2688]: E1212 18:35:07.320305 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:10.897438 containerd[1494]: time="2025-12-12T18:35:10.897159276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:35:11.234387 containerd[1494]: time="2025-12-12T18:35:11.234209742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:11.235256 containerd[1494]: time="2025-12-12T18:35:11.235191903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:35:11.235360 containerd[1494]: time="2025-12-12T18:35:11.235321228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:35:11.235564 kubelet[2688]: E1212 18:35:11.235511 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:35:11.236069 kubelet[2688]: E1212 
18:35:11.235576 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:35:11.236069 kubelet[2688]: E1212 18:35:11.235714 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dbe0f4e753c849fd967bc2966515448f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d82md,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66976c6d85-gwdlp_calico-system(d054c3a5-425b-4b52-9c15-9b92c6d5d874): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:11.239555 containerd[1494]: time="2025-12-12T18:35:11.239504675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:35:11.589118 containerd[1494]: time="2025-12-12T18:35:11.588895588Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:11.590495 containerd[1494]: time="2025-12-12T18:35:11.590408388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:35:11.591263 containerd[1494]: time="2025-12-12T18:35:11.590448325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:35:11.591318 kubelet[2688]: E1212 18:35:11.590869 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:35:11.591318 kubelet[2688]: E1212 18:35:11.590941 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:35:11.591318 kubelet[2688]: E1212 18:35:11.591088 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d82md,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66976c6d85-gwdlp_calico-system(d054c3a5-425b-4b52-9c15-9b92c6d5d874): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:11.592854 kubelet[2688]: E1212 18:35:11.592777 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66976c6d85-gwdlp" podUID="d054c3a5-425b-4b52-9c15-9b92c6d5d874" Dec 12 18:35:12.899337 containerd[1494]: time="2025-12-12T18:35:12.898380184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:35:13.240991 containerd[1494]: time="2025-12-12T18:35:13.240835341Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:13.241903 containerd[1494]: time="2025-12-12T18:35:13.241833607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:35:13.242040 containerd[1494]: time="2025-12-12T18:35:13.241943889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:13.242263 kubelet[2688]: E1212 18:35:13.242182 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:35:13.243064 kubelet[2688]: E1212 18:35:13.242278 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:35:13.243104 containerd[1494]: time="2025-12-12T18:35:13.242693644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:35:13.243879 kubelet[2688]: E1212 18:35:13.243719 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5fvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g4nz4_calico-system(c02cbe9e-1b81-42e6-bc64-5bb369970158): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:13.245381 kubelet[2688]: E1212 18:35:13.245193 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g4nz4" podUID="c02cbe9e-1b81-42e6-bc64-5bb369970158" Dec 12 18:35:13.581386 containerd[1494]: 
time="2025-12-12T18:35:13.581219371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:13.582902 containerd[1494]: time="2025-12-12T18:35:13.582794278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:35:13.583060 containerd[1494]: time="2025-12-12T18:35:13.582822226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:35:13.583446 kubelet[2688]: E1212 18:35:13.583386 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:35:13.584023 kubelet[2688]: E1212 18:35:13.583462 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:35:13.584023 kubelet[2688]: E1212 18:35:13.583769 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2zmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7858cfdf57-zqtcq_calico-system(e47b8144-038b-48bb-9d02-85c4035c0eac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:13.584404 containerd[1494]: time="2025-12-12T18:35:13.584366364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:35:13.585040 kubelet[2688]: E1212 18:35:13.585002 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" podUID="e47b8144-038b-48bb-9d02-85c4035c0eac" Dec 12 18:35:13.957726 containerd[1494]: time="2025-12-12T18:35:13.957643542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:13.958775 containerd[1494]: time="2025-12-12T18:35:13.958383291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:35:13.958775 containerd[1494]: time="2025-12-12T18:35:13.958470684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:13.961366 kubelet[2688]: E1212 18:35:13.961302 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:13.962893 kubelet[2688]: E1212 18:35:13.961589 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:13.962893 kubelet[2688]: 
E1212 18:35:13.961866 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4krg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6475b48c59-d227n_calico-apiserver(7961b513-fe6b-4e9c-af45-39f62e7bf7e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:13.963273 containerd[1494]: time="2025-12-12T18:35:13.962559045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:35:13.963570 kubelet[2688]: E1212 18:35:13.963528 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" podUID="7961b513-fe6b-4e9c-af45-39f62e7bf7e0" Dec 12 18:35:14.304395 containerd[1494]: time="2025-12-12T18:35:14.304213156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:14.305712 containerd[1494]: time="2025-12-12T18:35:14.305661558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:35:14.305879 containerd[1494]: time="2025-12-12T18:35:14.305772530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:14.306122 kubelet[2688]: E1212 18:35:14.306075 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:14.307036 kubelet[2688]: E1212 18:35:14.306499 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:14.307311 kubelet[2688]: E1212 18:35:14.307191 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llqtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6475b48c59-l57rn_calico-apiserver(0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:14.308633 kubelet[2688]: E1212 18:35:14.308580 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" podUID="0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29" Dec 12 18:35:16.341207 systemd[1]: Started sshd@7-134.199.220.206:22-147.75.109.163:34886.service - OpenSSH per-connection server daemon (147.75.109.163:34886). Dec 12 18:35:16.490558 sshd[4671]: Accepted publickey for core from 147.75.109.163 port 34886 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:16.494000 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:16.504109 systemd-logind[1468]: New session 8 of user core. Dec 12 18:35:16.511578 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:35:16.899636 containerd[1494]: time="2025-12-12T18:35:16.899526921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:35:17.168990 sshd[4674]: Connection closed by 147.75.109.163 port 34886 Dec 12 18:35:17.170022 sshd-session[4671]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:17.181531 systemd[1]: sshd@7-134.199.220.206:22-147.75.109.163:34886.service: Deactivated successfully. Dec 12 18:35:17.186894 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:35:17.191856 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:35:17.196505 systemd-logind[1468]: Removed session 8. 
Dec 12 18:35:17.248246 containerd[1494]: time="2025-12-12T18:35:17.248177363Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:17.249944 containerd[1494]: time="2025-12-12T18:35:17.249871242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:35:17.250101 containerd[1494]: time="2025-12-12T18:35:17.249998942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:35:17.250267 kubelet[2688]: E1212 18:35:17.250203 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:35:17.250846 kubelet[2688]: E1212 18:35:17.250288 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:35:17.250846 kubelet[2688]: E1212 18:35:17.250442 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlvsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g2sqb_calico-system(c6e78d63-2cda-428b-a981-9d8b48e5f776): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:17.252908 containerd[1494]: time="2025-12-12T18:35:17.252861117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:35:17.611158 containerd[1494]: time="2025-12-12T18:35:17.610838264Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:17.611850 containerd[1494]: time="2025-12-12T18:35:17.611797185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:35:17.612073 containerd[1494]: time="2025-12-12T18:35:17.611970283Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:35:17.613436 kubelet[2688]: E1212 18:35:17.612471 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:35:17.613436 kubelet[2688]: E1212 18:35:17.612528 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:35:17.613436 kubelet[2688]: E1212 18:35:17.612709 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlvsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g2sqb_calico-system(c6e78d63-2cda-428b-a981-9d8b48e5f776): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:17.614192 kubelet[2688]: E1212 18:35:17.614126 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:35:22.185593 systemd[1]: Started sshd@8-134.199.220.206:22-147.75.109.163:56240.service - OpenSSH per-connection server daemon (147.75.109.163:56240). Dec 12 18:35:22.286257 sshd[4698]: Accepted publickey for core from 147.75.109.163 port 56240 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:22.289265 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:22.298338 systemd-logind[1468]: New session 9 of user core. 
Dec 12 18:35:22.306445 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:35:22.498857 sshd[4701]: Connection closed by 147.75.109.163 port 56240 Dec 12 18:35:22.499505 sshd-session[4698]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:22.508664 systemd[1]: sshd@8-134.199.220.206:22-147.75.109.163:56240.service: Deactivated successfully. Dec 12 18:35:22.512191 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:35:22.515265 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:35:22.518610 systemd-logind[1468]: Removed session 9. Dec 12 18:35:22.897511 kubelet[2688]: E1212 18:35:22.897462 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66976c6d85-gwdlp" podUID="d054c3a5-425b-4b52-9c15-9b92c6d5d874" Dec 12 18:35:24.898020 kubelet[2688]: E1212 18:35:24.897636 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" podUID="7961b513-fe6b-4e9c-af45-39f62e7bf7e0" Dec 12 18:35:25.899628 kubelet[2688]: E1212 18:35:25.899554 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" podUID="0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29" Dec 12 18:35:26.898437 kubelet[2688]: E1212 18:35:26.898017 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g4nz4" 
podUID="c02cbe9e-1b81-42e6-bc64-5bb369970158" Dec 12 18:35:26.898808 kubelet[2688]: E1212 18:35:26.898779 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" podUID="e47b8144-038b-48bb-9d02-85c4035c0eac" Dec 12 18:35:27.513502 systemd[1]: Started sshd@9-134.199.220.206:22-147.75.109.163:56248.service - OpenSSH per-connection server daemon (147.75.109.163:56248). Dec 12 18:35:27.646987 sshd[4714]: Accepted publickey for core from 147.75.109.163 port 56248 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:27.650081 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:27.662827 systemd-logind[1468]: New session 10 of user core. Dec 12 18:35:27.667520 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 18:35:27.954081 sshd[4717]: Connection closed by 147.75.109.163 port 56248 Dec 12 18:35:27.953941 sshd-session[4714]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:27.975553 systemd[1]: Started sshd@10-134.199.220.206:22-147.75.109.163:56258.service - OpenSSH per-connection server daemon (147.75.109.163:56258). Dec 12 18:35:27.976928 systemd[1]: sshd@9-134.199.220.206:22-147.75.109.163:56248.service: Deactivated successfully. Dec 12 18:35:27.984578 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:35:27.988431 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:35:27.997478 systemd-logind[1468]: Removed session 10. Dec 12 18:35:28.132256 sshd[4728]: Accepted publickey for core from 147.75.109.163 port 56258 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:28.133879 sshd-session[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:28.146432 systemd-logind[1468]: New session 11 of user core. Dec 12 18:35:28.151527 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:35:28.454504 sshd[4734]: Connection closed by 147.75.109.163 port 56258 Dec 12 18:35:28.457770 sshd-session[4728]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:28.471175 systemd[1]: sshd@10-134.199.220.206:22-147.75.109.163:56258.service: Deactivated successfully. Dec 12 18:35:28.479195 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:35:28.481904 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:35:28.487753 systemd-logind[1468]: Removed session 11. Dec 12 18:35:28.492026 systemd[1]: Started sshd@11-134.199.220.206:22-147.75.109.163:56274.service - OpenSSH per-connection server daemon (147.75.109.163:56274). Dec 12 18:35:28.601926 sshd[4744]: Accepted publickey for core from 147.75.109.163 port 56274 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:28.604164 sshd-session[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:28.613601 systemd-logind[1468]: New session 12 of user core. 
Dec 12 18:35:28.618702 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:35:28.799064 sshd[4747]: Connection closed by 147.75.109.163 port 56274 Dec 12 18:35:28.798494 sshd-session[4744]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:28.806152 systemd[1]: sshd@11-134.199.220.206:22-147.75.109.163:56274.service: Deactivated successfully. Dec 12 18:35:28.809566 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:35:28.812611 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:35:28.814666 systemd-logind[1468]: Removed session 12. Dec 12 18:35:31.904443 kubelet[2688]: E1212 18:35:31.904331 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:35:33.816031 systemd[1]: Started sshd@12-134.199.220.206:22-147.75.109.163:53940.service - OpenSSH per-connection server daemon (147.75.109.163:53940). Dec 12 18:35:33.898315 sshd[4761]: Accepted publickey for core from 147.75.109.163 port 53940 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:33.900489 sshd-session[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:33.908935 systemd-logind[1468]: New session 13 of user core. Dec 12 18:35:33.914584 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:35:34.082319 sshd[4764]: Connection closed by 147.75.109.163 port 53940 Dec 12 18:35:34.083768 sshd-session[4761]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:34.088568 systemd[1]: sshd@12-134.199.220.206:22-147.75.109.163:53940.service: Deactivated successfully. Dec 12 18:35:34.092109 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:35:34.094829 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:35:34.097780 systemd-logind[1468]: Removed session 13. 
Dec 12 18:35:37.899910 containerd[1494]: time="2025-12-12T18:35:37.899070621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:35:38.256723 containerd[1494]: time="2025-12-12T18:35:38.256431185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:38.258245 containerd[1494]: time="2025-12-12T18:35:38.257842312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:35:38.258245 containerd[1494]: time="2025-12-12T18:35:38.257843486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:35:38.259077 kubelet[2688]: E1212 18:35:38.258765 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:35:38.259077 kubelet[2688]: E1212 18:35:38.258830 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:35:38.259077 kubelet[2688]: E1212 18:35:38.259003 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dbe0f4e753c849fd967bc2966515448f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d82md,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66976c6d85-gwdlp_calico-system(d054c3a5-425b-4b52-9c15-9b92c6d5d874): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:38.262764 containerd[1494]: time="2025-12-12T18:35:38.261806760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:35:38.606630 containerd[1494]: time="2025-12-12T18:35:38.606193267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:38.608552 containerd[1494]: time="2025-12-12T18:35:38.608482327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:35:38.608704 containerd[1494]: time="2025-12-12T18:35:38.608497869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:35:38.608952 kubelet[2688]: E1212 18:35:38.608900 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:35:38.609046 kubelet[2688]: E1212 18:35:38.608966 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:35:38.611330 kubelet[2688]: E1212 18:35:38.609108 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d82md,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66976c6d85-gwdlp_calico-system(d054c3a5-425b-4b52-9c15-9b92c6d5d874): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:38.612056 kubelet[2688]: E1212 18:35:38.611696 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66976c6d85-gwdlp" podUID="d054c3a5-425b-4b52-9c15-9b92c6d5d874" Dec 12 18:35:38.898188 containerd[1494]: time="2025-12-12T18:35:38.898131429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:35:39.100616 systemd[1]: Started sshd@13-134.199.220.206:22-147.75.109.163:53954.service - OpenSSH per-connection server daemon (147.75.109.163:53954). 
Dec 12 18:35:39.179817 sshd[4813]: Accepted publickey for core from 147.75.109.163 port 53954 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:39.181334 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:39.192371 systemd-logind[1468]: New session 14 of user core. Dec 12 18:35:39.196777 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:35:39.241663 containerd[1494]: time="2025-12-12T18:35:39.241479919Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:39.242400 containerd[1494]: time="2025-12-12T18:35:39.242360120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:35:39.242515 containerd[1494]: time="2025-12-12T18:35:39.242493145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:39.243024 kubelet[2688]: E1212 18:35:39.242834 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:39.243024 kubelet[2688]: E1212 18:35:39.242890 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:39.243286 kubelet[2688]: E1212 18:35:39.243208 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4krg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6475b48c59-d227n_calico-apiserver(7961b513-fe6b-4e9c-af45-39f62e7bf7e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:39.244412 kubelet[2688]: E1212 18:35:39.244346 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" podUID="7961b513-fe6b-4e9c-af45-39f62e7bf7e0" Dec 12 18:35:39.245367 containerd[1494]: time="2025-12-12T18:35:39.244567462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:35:39.395214 sshd[4816]: Connection closed by 147.75.109.163 port 53954 Dec 12 18:35:39.396193 sshd-session[4813]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:39.403469 systemd[1]: sshd@13-134.199.220.206:22-147.75.109.163:53954.service: Deactivated successfully. Dec 12 18:35:39.409022 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:35:39.417579 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:35:39.423609 systemd-logind[1468]: Removed session 14. 
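[Editor's note] The long `&Container{...}` dumps in these kubelet entries are Go `corev1.Container` values printed verbatim. As a reading aid, here is the calico-apiserver readiness probe from the dump above reconstructed with the real k8s.io/api types; every field value is copied from the log (`Port:{0 5443 }` is an `intstr.IntOrString` with integer type), nothing is invented:

```go
// Reconstruction of the readiness probe printed in the kubelet dump.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	readiness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/readyz",
				Port:   intstr.FromInt(5443),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		TimeoutSeconds:   5,
		PeriodSeconds:    60,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	}
	fmt.Printf("%+v\n", readiness)
}
```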
Dec 12 18:35:39.583329 containerd[1494]: time="2025-12-12T18:35:39.583172533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:39.584205 containerd[1494]: time="2025-12-12T18:35:39.584157495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:35:39.584329 containerd[1494]: time="2025-12-12T18:35:39.584295740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:35:39.584560 kubelet[2688]: E1212 18:35:39.584509 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:35:39.584892 kubelet[2688]: E1212 18:35:39.584577 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:35:39.584924 kubelet[2688]: E1212 18:35:39.584861 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2zmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7858cfdf57-zqtcq_calico-system(e47b8144-038b-48bb-9d02-85c4035c0eac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:39.586539 kubelet[2688]: E1212 18:35:39.586140 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" podUID="e47b8144-038b-48bb-9d02-85c4035c0eac" Dec 12 18:35:39.586708 containerd[1494]: time="2025-12-12T18:35:39.586484027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:35:39.938694 containerd[1494]: time="2025-12-12T18:35:39.938620814Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:39.939654 containerd[1494]: time="2025-12-12T18:35:39.939592856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:35:39.939842 containerd[1494]: time="2025-12-12T18:35:39.939653699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:39.940462 kubelet[2688]: E1212 18:35:39.940420 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:35:39.940564 kubelet[2688]: E1212 18:35:39.940471 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:35:39.940870 kubelet[2688]: E1212 
18:35:39.940817 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5fvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g4nz4_calico-system(c02cbe9e-1b81-42e6-bc64-5bb369970158): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:39.942414 kubelet[2688]: E1212 18:35:39.942338 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g4nz4" 
podUID="c02cbe9e-1b81-42e6-bc64-5bb369970158" Dec 12 18:35:39.942937 containerd[1494]: time="2025-12-12T18:35:39.942903232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:35:40.317218 containerd[1494]: time="2025-12-12T18:35:40.317068224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:40.318356 containerd[1494]: time="2025-12-12T18:35:40.318119378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:35:40.319351 containerd[1494]: time="2025-12-12T18:35:40.318153770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:40.319870 kubelet[2688]: E1212 18:35:40.319563 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:40.319870 kubelet[2688]: E1212 18:35:40.319626 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:40.319870 kubelet[2688]: E1212 18:35:40.319815 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llqtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6475b48c59-l57rn_calico-apiserver(0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:40.321038 kubelet[2688]: E1212 18:35:40.320983 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" podUID="0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29" Dec 12 18:35:42.896682 kubelet[2688]: E1212 18:35:42.896630 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:42.898990 containerd[1494]: time="2025-12-12T18:35:42.898684206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:35:43.281959 containerd[1494]: time="2025-12-12T18:35:43.281169707Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:43.282392 containerd[1494]: time="2025-12-12T18:35:43.282307882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:35:43.282581 containerd[1494]: time="2025-12-12T18:35:43.282361997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:35:43.283696 kubelet[2688]: E1212 18:35:43.282966 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:35:43.283696 kubelet[2688]: E1212 18:35:43.283025 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:35:43.283696 kubelet[2688]: E1212 18:35:43.283210 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlvsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g2sqb_calico-system(c6e78d63-2cda-428b-a981-9d8b48e5f776): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:43.286668 containerd[1494]: time="2025-12-12T18:35:43.286471503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:35:43.600745 containerd[1494]: time="2025-12-12T18:35:43.600608404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:43.602981 containerd[1494]: time="2025-12-12T18:35:43.602909812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:35:43.603267 containerd[1494]: time="2025-12-12T18:35:43.603128516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:35:43.603704 kubelet[2688]: E1212 18:35:43.603639 2688 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:35:43.604330 kubelet[2688]: E1212 18:35:43.604260 2688 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:35:43.605274 kubelet[2688]: E1212 18:35:43.605025 2688 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlvsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g2sqb_calico-system(c6e78d63-2cda-428b-a981-9d8b48e5f776): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:43.606973 kubelet[2688]: E1212 18:35:43.606863 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:35:44.417718 systemd[1]: Started sshd@14-134.199.220.206:22-147.75.109.163:39510.service - OpenSSH per-connection server daemon (147.75.109.163:39510). Dec 12 18:35:44.537017 sshd[4830]: Accepted publickey for core from 147.75.109.163 port 39510 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:44.539892 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:44.548151 systemd-logind[1468]: New session 15 of user core. Dec 12 18:35:44.554481 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 18:35:44.736080 sshd[4833]: Connection closed by 147.75.109.163 port 39510 Dec 12 18:35:44.737460 sshd-session[4830]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:44.746523 systemd[1]: sshd@14-134.199.220.206:22-147.75.109.163:39510.service: Deactivated successfully. Dec 12 18:35:44.752151 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:35:44.754620 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:35:44.757798 systemd-logind[1468]: Removed session 15. Dec 12 18:35:45.899218 kubelet[2688]: E1212 18:35:45.899176 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:49.751098 systemd[1]: Started sshd@15-134.199.220.206:22-147.75.109.163:39514.service - OpenSSH per-connection server daemon (147.75.109.163:39514). Dec 12 18:35:49.842769 sshd[4845]: Accepted publickey for core from 147.75.109.163 port 39514 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:49.845026 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:49.854406 systemd-logind[1468]: New session 16 of user core. Dec 12 18:35:49.860493 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:35:49.896004 kubelet[2688]: E1212 18:35:49.895345 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:49.897930 kubelet[2688]: E1212 18:35:49.897525 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:35:50.060187 sshd[4848]: Connection closed by 147.75.109.163 port 39514 Dec 12 18:35:50.060919 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:50.077920 systemd[1]: sshd@15-134.199.220.206:22-147.75.109.163:39514.service: Deactivated successfully. Dec 12 18:35:50.082715 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:35:50.085921 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. 
Dec 12 18:35:50.092377 systemd[1]: Started sshd@16-134.199.220.206:22-147.75.109.163:39520.service - OpenSSH per-connection server daemon (147.75.109.163:39520). Dec 12 18:35:50.097743 systemd-logind[1468]: Removed session 16. Dec 12 18:35:50.172122 sshd[4860]: Accepted publickey for core from 147.75.109.163 port 39520 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:50.175184 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:50.183456 systemd-logind[1468]: New session 17 of user core. Dec 12 18:35:50.191522 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 18:35:50.680212 sshd[4863]: Connection closed by 147.75.109.163 port 39520 Dec 12 18:35:50.683656 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:50.704770 systemd[1]: sshd@16-134.199.220.206:22-147.75.109.163:39520.service: Deactivated successfully. Dec 12 18:35:50.712726 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:35:50.715139 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:35:50.722300 systemd-logind[1468]: Removed session 17. Dec 12 18:35:50.725103 systemd[1]: Started sshd@17-134.199.220.206:22-147.75.109.163:39522.service - OpenSSH per-connection server daemon (147.75.109.163:39522). Dec 12 18:35:50.907809 sshd[4874]: Accepted publickey for core from 147.75.109.163 port 39522 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:50.910679 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:50.918961 systemd-logind[1468]: New session 18 of user core. Dec 12 18:35:50.924542 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 18:35:51.788497 sshd[4877]: Connection closed by 147.75.109.163 port 39522 Dec 12 18:35:51.790367 sshd-session[4874]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:51.808162 systemd[1]: sshd@17-134.199.220.206:22-147.75.109.163:39522.service: Deactivated successfully. Dec 12 18:35:51.812834 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:35:51.815135 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:35:51.823395 systemd[1]: Started sshd@18-134.199.220.206:22-147.75.109.163:39524.service - OpenSSH per-connection server daemon (147.75.109.163:39524). Dec 12 18:35:51.826360 systemd-logind[1468]: Removed session 18. Dec 12 18:35:51.908997 kubelet[2688]: E1212 18:35:51.908937 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g4nz4" podUID="c02cbe9e-1b81-42e6-bc64-5bb369970158" Dec 12 18:35:51.910804 sshd[4894]: Accepted publickey for core from 147.75.109.163 port 39524 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:51.916966 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:51.928572 systemd-logind[1468]: New session 19 of user core. Dec 12 18:35:51.934536 systemd[1]: Started session-19.scope - Session 19 of User core. 
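[Editor's note] From here the goldmane error has flipped from ErrImagePull to ImagePullBackOff: kubelet now delays between pull attempts instead of retrying immediately. A sketch of the documented backoff shape, assuming the usual 10-second initial delay doubling to a 5-minute cap (values from the Kubernetes image-pull documentation, not read out of this log):

```go
// Print the delay kubelet would apply before each successive pull retry.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("attempt %d: wait %v before next pull\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // backoff is capped, not unbounded
		}
	}
}
```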
Dec 12 18:35:52.430311 sshd[4899]: Connection closed by 147.75.109.163 port 39524 Dec 12 18:35:52.430758 sshd-session[4894]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:52.445726 systemd[1]: sshd@18-134.199.220.206:22-147.75.109.163:39524.service: Deactivated successfully. Dec 12 18:35:52.449005 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:35:52.453334 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:35:52.457777 systemd[1]: Started sshd@19-134.199.220.206:22-147.75.109.163:42782.service - OpenSSH per-connection server daemon (147.75.109.163:42782). Dec 12 18:35:52.462466 systemd-logind[1468]: Removed session 19. Dec 12 18:35:52.538008 sshd[4909]: Accepted publickey for core from 147.75.109.163 port 42782 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:52.540587 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:52.550600 systemd-logind[1468]: New session 20 of user core. Dec 12 18:35:52.560167 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 18:35:52.766522 sshd[4912]: Connection closed by 147.75.109.163 port 42782 Dec 12 18:35:52.769275 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:52.780438 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:35:52.781579 systemd[1]: sshd@19-134.199.220.206:22-147.75.109.163:42782.service: Deactivated successfully. Dec 12 18:35:52.790377 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:35:52.801079 systemd-logind[1468]: Removed session 20. Dec 12 18:35:52.900926 kubelet[2688]: E1212 18:35:52.900757 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66976c6d85-gwdlp" podUID="d054c3a5-425b-4b52-9c15-9b92c6d5d874" Dec 12 18:35:53.900081 kubelet[2688]: E1212 18:35:53.898439 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" podUID="e47b8144-038b-48bb-9d02-85c4035c0eac" Dec 12 18:35:53.900081 kubelet[2688]: E1212 18:35:53.898864 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" podUID="7961b513-fe6b-4e9c-af45-39f62e7bf7e0" Dec 12 18:35:54.935620 kubelet[2688]: E1212 18:35:54.935519 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" podUID="0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29" Dec 12 18:35:56.898152 kubelet[2688]: E1212 18:35:56.898079 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:35:57.785731 systemd[1]: Started sshd@20-134.199.220.206:22-147.75.109.163:42788.service - OpenSSH per-connection server daemon (147.75.109.163:42788). Dec 12 18:35:57.884040 sshd[4924]: Accepted publickey for core from 147.75.109.163 port 42788 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:35:57.886101 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:57.897332 systemd-logind[1468]: New session 21 of user core. Dec 12 18:35:57.902554 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 18:35:58.142448 sshd[4927]: Connection closed by 147.75.109.163 port 42788 Dec 12 18:35:58.143512 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:58.151286 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:35:58.152022 systemd[1]: sshd@20-134.199.220.206:22-147.75.109.163:42788.service: Deactivated successfully. Dec 12 18:35:58.156081 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:35:58.163244 systemd-logind[1468]: Removed session 21. Dec 12 18:36:03.164745 systemd[1]: Started sshd@21-134.199.220.206:22-147.75.109.163:47022.service - OpenSSH per-connection server daemon (147.75.109.163:47022). 
Dec 12 18:36:03.307910 sshd[4943]: Accepted publickey for core from 147.75.109.163 port 47022 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:36:03.311220 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:36:03.324911 systemd-logind[1468]: New session 22 of user core. Dec 12 18:36:03.330605 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 18:36:03.696370 sshd[4946]: Connection closed by 147.75.109.163 port 47022 Dec 12 18:36:03.697562 sshd-session[4943]: pam_unix(sshd:session): session closed for user core Dec 12 18:36:03.706421 systemd[1]: sshd@21-134.199.220.206:22-147.75.109.163:47022.service: Deactivated successfully. Dec 12 18:36:03.710172 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:36:03.711722 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:36:03.714007 systemd-logind[1468]: Removed session 22. Dec 12 18:36:04.898758 kubelet[2688]: E1212 18:36:04.898261 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-d227n" podUID="7961b513-fe6b-4e9c-af45-39f62e7bf7e0" Dec 12 18:36:04.898758 kubelet[2688]: E1212 18:36:04.898694 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g4nz4" podUID="c02cbe9e-1b81-42e6-bc64-5bb369970158" Dec 12 18:36:04.925799 kubelet[2688]: E1212 18:36:04.924921 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66976c6d85-gwdlp" podUID="d054c3a5-425b-4b52-9c15-9b92c6d5d874" Dec 12 18:36:05.898720 kubelet[2688]: E1212 18:36:05.898675 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:36:07.901188 
kubelet[2688]: E1212 18:36:07.901129 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7858cfdf57-zqtcq" podUID="e47b8144-038b-48bb-9d02-85c4035c0eac" Dec 12 18:36:07.903765 kubelet[2688]: E1212 18:36:07.902138 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g2sqb" podUID="c6e78d63-2cda-428b-a981-9d8b48e5f776" Dec 12 18:36:08.719733 systemd[1]: Started sshd@22-134.199.220.206:22-147.75.109.163:47034.service - OpenSSH per-connection server daemon (147.75.109.163:47034). Dec 12 18:36:08.822800 sshd[4985]: Accepted publickey for core from 147.75.109.163 port 47034 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:36:08.825317 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:36:08.833278 systemd-logind[1468]: New session 23 of user core. Dec 12 18:36:08.840560 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 18:36:08.897576 kubelet[2688]: E1212 18:36:08.897507 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6475b48c59-l57rn" podUID="0e14d6f7-4179-48cd-a9f8-ec5f09bb3e29" Dec 12 18:36:09.054186 sshd[4988]: Connection closed by 147.75.109.163 port 47034 Dec 12 18:36:09.057442 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Dec 12 18:36:09.062836 systemd[1]: sshd@22-134.199.220.206:22-147.75.109.163:47034.service: Deactivated successfully. Dec 12 18:36:09.068593 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:36:09.070552 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:36:09.072040 systemd-logind[1468]: Removed session 23. 
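[Editor's note] Since the ghcr.io/flatcar/calico/*:v3.30.4 references simply do not exist in the registry, one remediation is to mirror matching images into the expected locations so the pods above can start. A sketch using go-containerregistry's crane; the quay.io/calico source is an assumption about where matching upstream v3.30.4 tags live, and pushing to ghcr.io requires credentials configured in the ambient keychain:

```go
// Mirror the missing Calico images to the references kubelet is pulling.
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	images := []string{
		"whisker", "whisker-backend", "apiserver",
		"kube-controllers", "goldmane", "csi", "node-driver-registrar",
	}
	for _, img := range images {
		src := fmt.Sprintf("quay.io/calico/%s:v3.30.4", img)         // assumed upstream source
		dst := fmt.Sprintf("ghcr.io/flatcar/calico/%s:v3.30.4", img) // reference from the log
		if err := crane.Copy(src, dst); err != nil {
			fmt.Printf("mirror %s failed: %v\n", img, err)
			continue
		}
		fmt.Println("mirrored", dst)
	}
}
```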
Dec 12 18:36:10.896879 kubelet[2688]: E1212 18:36:10.896718 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"