Dec 16 13:07:05.984167 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:07:05.984214 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:07:05.986299 kernel: BIOS-provided physical RAM map:
Dec 16 13:07:05.986346 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 13:07:05.986358 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 13:07:05.986369 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 13:07:05.986377 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Dec 16 13:07:05.986395 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Dec 16 13:07:05.986401 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:07:05.986411 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 13:07:05.986425 kernel: NX (Execute Disable) protection: active
Dec 16 13:07:05.986447 kernel: APIC: Static calls initialized
Dec 16 13:07:05.986458 kernel: SMBIOS 2.8 present.
Dec 16 13:07:05.986470 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 16 13:07:05.986484 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:07:05.986491 kernel: Hypervisor detected: KVM
Dec 16 13:07:05.986511 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 16 13:07:05.986522 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:07:05.986533 kernel: kvm-clock: using sched offset of 5540542965 cycles
Dec 16 13:07:05.986546 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:07:05.986559 kernel: tsc: Detected 1995.312 MHz processor
Dec 16 13:07:05.986573 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:07:05.986585 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:07:05.986592 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 16 13:07:05.986600 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 13:07:05.986608 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:07:05.986619 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:07:05.986627 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Dec 16 13:07:05.986634 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:07:05.986642 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:07:05.986650 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:07:05.986657 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 16 13:07:05.986665 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:07:05.986672 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:07:05.986682 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:07:05.986690 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:07:05.986697 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854]
Dec 16 13:07:05.986705 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0]
Dec 16 13:07:05.986712 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 16 13:07:05.986720 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4]
Dec 16 13:07:05.986731 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c]
Dec 16 13:07:05.986742 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4]
Dec 16 13:07:05.986750 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc]
Dec 16 13:07:05.986758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 16 13:07:05.986765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 16 13:07:05.986773 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Dec 16 13:07:05.986781 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Dec 16 13:07:05.986789 kernel: Zone ranges:
Dec 16 13:07:05.986799 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:07:05.986807 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Dec 16 13:07:05.986815 kernel: Normal empty
Dec 16 13:07:05.986823 kernel: Device empty
Dec 16 13:07:05.986830 kernel: Movable zone start for each node
Dec 16 13:07:05.986838 kernel: Early memory node ranges
Dec 16 13:07:05.986846 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 13:07:05.986854 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Dec 16 13:07:05.986861 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Dec 16 13:07:05.986869 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:07:05.986879 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 13:07:05.986887 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Dec 16 13:07:05.986895 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 13:07:05.986908 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:07:05.986916 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:07:05.986927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 13:07:05.986935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:07:05.986942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:07:05.986954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:07:05.986964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:07:05.986973 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:07:05.986980 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:07:05.986988 kernel: TSC deadline timer available
Dec 16 13:07:05.986996 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:07:05.987004 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:07:05.987011 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:07:05.987019 kernel: CPU topo: Max. threads per core: 1
Dec 16 13:07:05.987027 kernel: CPU topo: Num. cores per package: 2
Dec 16 13:07:05.987041 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:07:05.987052 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:07:05.987064 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:07:05.987075 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 16 13:07:05.987086 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:07:05.987098 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:07:05.987110 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:07:05.987121 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:07:05.987132 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:07:05.987144 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:07:05.987160 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 16 13:07:05.987174 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:07:05.987188 kernel: random: crng init done
Dec 16 13:07:05.987200 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:07:05.987212 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:07:05.987224 kernel: Fallback order for Node 0: 0
Dec 16 13:07:05.987269 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Dec 16 13:07:05.987282 kernel: Policy zone: DMA32
Dec 16 13:07:05.987299 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:07:05.987312 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:07:05.987325 kernel: Kernel/User page tables isolation: enabled
Dec 16 13:07:05.987338 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:07:05.987350 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:07:05.987363 kernel: Dynamic Preempt: voluntary
Dec 16 13:07:05.987376 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:07:05.987392 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:07:05.987401 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:07:05.987419 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:07:05.987427 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:07:05.987436 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:07:05.987444 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:07:05.987452 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:07:05.987460 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:07:05.987472 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:07:05.987481 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:07:05.987489 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 13:07:05.987503 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:07:05.987511 kernel: Console: colour VGA+ 80x25
Dec 16 13:07:05.987519 kernel: printk: legacy console [tty0] enabled
Dec 16 13:07:05.987527 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:07:05.987535 kernel: ACPI: Core revision 20240827
Dec 16 13:07:05.987543 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 13:07:05.987569 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:07:05.987584 kernel: x2apic enabled
Dec 16 13:07:05.987593 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:07:05.987601 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 13:07:05.987610 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Dec 16 13:07:05.987622 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Dec 16 13:07:05.987637 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 13:07:05.987646 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 13:07:05.987655 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:07:05.987663 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:07:05.987678 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:07:05.987687 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 16 13:07:05.987696 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:07:05.987704 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:07:05.987713 kernel: MDS: Mitigation: Clear CPU buffers
Dec 16 13:07:05.987721 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:07:05.987730 kernel: active return thunk: its_return_thunk
Dec 16 13:07:05.987738 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:07:05.987747 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:07:05.987762 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:07:05.987772 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:07:05.987791 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:07:05.987803 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 16 13:07:05.987817 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:07:05.987831 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:07:05.987844 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:07:05.987857 kernel: landlock: Up and running.
Dec 16 13:07:05.987870 kernel: SELinux: Initializing.
Dec 16 13:07:05.987893 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:07:05.987902 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:07:05.987910 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Dec 16 13:07:05.987919 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Dec 16 13:07:05.987928 kernel: signal: max sigframe size: 1776
Dec 16 13:07:05.987936 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:07:05.987945 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:07:05.987983 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:07:05.987992 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:07:05.988008 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:07:05.988026 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:07:05.988040 kernel: .... node #0, CPUs: #1
Dec 16 13:07:05.988056 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:07:05.988067 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Dec 16 13:07:05.988086 kernel: Memory: 1958716K/2096612K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133332K reserved, 0K cma-reserved)
Dec 16 13:07:05.988171 kernel: devtmpfs: initialized
Dec 16 13:07:05.988183 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:07:05.988199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:07:05.988223 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:07:05.992315 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:07:05.992381 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:07:05.992392 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:07:05.992403 kernel: audit: type=2000 audit(1765890421.851:1): state=initialized audit_enabled=0 res=1
Dec 16 13:07:05.992418 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:07:05.992432 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:07:05.992446 kernel: cpuidle: using governor menu
Dec 16 13:07:05.992460 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:07:05.992499 kernel: dca service started, version 1.12.1
Dec 16 13:07:05.992513 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:07:05.992527 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:07:05.992540 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:07:05.992553 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:07:05.992567 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:07:05.992578 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:07:05.992586 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:07:05.992595 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:07:05.992617 kernel: ACPI: Interpreter enabled
Dec 16 13:07:05.992631 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:07:05.992646 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:07:05.992660 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:07:05.992669 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:07:05.992678 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 16 13:07:05.992686 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:07:05.993065 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:07:05.993204 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 16 13:07:05.993338 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 16 13:07:05.993350 kernel: acpiphp: Slot [3] registered
Dec 16 13:07:05.993362 kernel: acpiphp: Slot [4] registered
Dec 16 13:07:05.993380 kernel: acpiphp: Slot [5] registered
Dec 16 13:07:05.993391 kernel: acpiphp: Slot [6] registered
Dec 16 13:07:05.993404 kernel: acpiphp: Slot [7] registered
Dec 16 13:07:05.993417 kernel: acpiphp: Slot [8] registered
Dec 16 13:07:05.993443 kernel: acpiphp: Slot [9] registered
Dec 16 13:07:05.993456 kernel: acpiphp: Slot [10] registered
Dec 16 13:07:05.993469 kernel: acpiphp: Slot [11] registered
Dec 16 13:07:05.993483 kernel: acpiphp: Slot [12] registered
Dec 16 13:07:05.993496 kernel: acpiphp: Slot [13] registered
Dec 16 13:07:05.993510 kernel: acpiphp: Slot [14] registered
Dec 16 13:07:05.993520 kernel: acpiphp: Slot [15] registered
Dec 16 13:07:05.993534 kernel: acpiphp: Slot [16] registered
Dec 16 13:07:05.993546 kernel: acpiphp: Slot [17] registered
Dec 16 13:07:05.993561 kernel: acpiphp: Slot [18] registered
Dec 16 13:07:05.993585 kernel: acpiphp: Slot [19] registered
Dec 16 13:07:05.993594 kernel: acpiphp: Slot [20] registered
Dec 16 13:07:05.993602 kernel: acpiphp: Slot [21] registered
Dec 16 13:07:05.993611 kernel: acpiphp: Slot [22] registered
Dec 16 13:07:05.993619 kernel: acpiphp: Slot [23] registered
Dec 16 13:07:05.993628 kernel: acpiphp: Slot [24] registered
Dec 16 13:07:05.993637 kernel: acpiphp: Slot [25] registered
Dec 16 13:07:05.993645 kernel: acpiphp: Slot [26] registered
Dec 16 13:07:05.993654 kernel: acpiphp: Slot [27] registered
Dec 16 13:07:05.993669 kernel: acpiphp: Slot [28] registered
Dec 16 13:07:05.993678 kernel: acpiphp: Slot [29] registered
Dec 16 13:07:05.993686 kernel: acpiphp: Slot [30] registered
Dec 16 13:07:05.993695 kernel: acpiphp: Slot [31] registered
Dec 16 13:07:05.993703 kernel: PCI host bridge to bus 0000:00
Dec 16 13:07:05.993898 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:07:05.994023 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:07:05.994127 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:07:05.996411 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 16 13:07:05.996629 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 16 13:07:05.996751 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:07:05.996944 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:07:05.997131 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:07:05.997337 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 16 13:07:05.997492 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Dec 16 13:07:05.997620 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Dec 16 13:07:05.997747 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Dec 16 13:07:05.997874 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Dec 16 13:07:05.998000 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Dec 16 13:07:05.998151 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 16 13:07:06.000474 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Dec 16 13:07:06.000720 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 16 13:07:06.000865 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 16 13:07:06.001000 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 16 13:07:06.001145 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:07:06.001328 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 16 13:07:06.001463 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 16 13:07:06.001631 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Dec 16 13:07:06.001759 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Dec 16 13:07:06.001888 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:07:06.002049 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 13:07:06.002179 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Dec 16 13:07:06.004513 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Dec 16 13:07:06.004708 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 16 13:07:06.004917 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 13:07:06.005052 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Dec 16 13:07:06.005179 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Dec 16 13:07:06.005339 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 16 13:07:06.005492 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 16 13:07:06.005658 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Dec 16 13:07:06.005806 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Dec 16 13:07:06.005957 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 16 13:07:06.006106 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 13:07:06.013322 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Dec 16 13:07:06.013764 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Dec 16 13:07:06.013933 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 16 13:07:06.014117 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 13:07:06.014311 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Dec 16 13:07:06.014495 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Dec 16 13:07:06.014636 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 16 13:07:06.014824 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 13:07:06.014964 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Dec 16 13:07:06.015106 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 16 13:07:06.015125 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:07:06.015162 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:07:06.015175 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:07:06.015188 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:07:06.015201 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 16 13:07:06.015215 kernel: iommu: Default domain type: Translated
Dec 16 13:07:06.015229 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:07:06.015270 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:07:06.015284 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:07:06.015298 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 13:07:06.015322 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Dec 16 13:07:06.015458 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 16 13:07:06.015552 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 16 13:07:06.015646 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:07:06.015657 kernel: vgaarb: loaded
Dec 16 13:07:06.015666 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 13:07:06.015675 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 13:07:06.015684 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:07:06.015693 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:07:06.015710 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:07:06.015718 kernel: pnp: PnP ACPI init
Dec 16 13:07:06.015727 kernel: pnp: PnP ACPI: found 4 devices
Dec 16 13:07:06.015737 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:07:06.015746 kernel: NET: Registered PF_INET protocol family
Dec 16 13:07:06.015754 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:07:06.015763 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 13:07:06.015773 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:07:06.015782 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:07:06.015797 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 13:07:06.015806 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 13:07:06.015815 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:07:06.015823 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:07:06.015832 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:07:06.015841 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:07:06.015937 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:07:06.016024 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:07:06.016152 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:07:06.016324 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 16 13:07:06.016442 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 16 13:07:06.016560 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 16 13:07:06.016661 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 16 13:07:06.016675 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 16 13:07:06.016803 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 37462 usecs
Dec 16 13:07:06.016817 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:07:06.016827 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:07:06.016850 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Dec 16 13:07:06.016859 kernel: Initialise system trusted keyrings
Dec 16 13:07:06.016869 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 13:07:06.016878 kernel: Key type asymmetric registered
Dec 16 13:07:06.016887 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:07:06.016896 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:07:06.016905 kernel: io scheduler mq-deadline registered
Dec 16 13:07:06.016914 kernel: io scheduler kyber registered
Dec 16 13:07:06.016929 kernel: io scheduler bfq registered
Dec 16 13:07:06.016938 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:07:06.016947 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 16 13:07:06.016956 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 16 13:07:06.016964 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 16 13:07:06.016975 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:07:06.016988 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:07:06.017001 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:07:06.017014 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:07:06.017035 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:07:06.017044 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:07:06.017168 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 16 13:07:06.017302 kernel: rtc_cmos 00:03: registered as rtc0
Dec 16 13:07:06.017391 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T13:07:05 UTC (1765890425)
Dec 16 13:07:06.017478 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 16 13:07:06.017489 kernel: intel_pstate: CPU model not supported
Dec 16 13:07:06.017498 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:07:06.017553 kernel: Segment Routing with IPv6
Dec 16 13:07:06.017562 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:07:06.017571 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:07:06.017580 kernel: Key type dns_resolver registered
Dec 16 13:07:06.017589 kernel: IPI shorthand broadcast: enabled
Dec 16 13:07:06.017603 kernel: sched_clock: Marking stable (4132003854, 261847276)->(4463319377, -69468247)
Dec 16 13:07:06.017613 kernel: registered taskstats version 1
Dec 16 13:07:06.017621 kernel: Loading compiled-in X.509 certificates
Dec 16 13:07:06.017630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:07:06.017646 kernel: Demotion targets for Node 0: null
Dec 16 13:07:06.017654 kernel: Key type .fscrypt registered
Dec 16 13:07:06.017663 kernel: Key type fscrypt-provisioning registered
Dec 16 13:07:06.017720 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:07:06.017743 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:07:06.017753 kernel: ima: No architecture policies found
Dec 16 13:07:06.017762 kernel: clk: Disabling unused clocks
Dec 16 13:07:06.017779 kernel: Warning: unable to open an initial console.
Dec 16 13:07:06.017793 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:07:06.017840 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:07:06.017856 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:07:06.017871 kernel: Run /init as init process
Dec 16 13:07:06.017887 kernel: with arguments:
Dec 16 13:07:06.017904 kernel: /init
Dec 16 13:07:06.017912 kernel: with environment:
Dec 16 13:07:06.017921 kernel: HOME=/
Dec 16 13:07:06.017930 kernel: TERM=linux
Dec 16 13:07:06.017942 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:07:06.017969 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:07:06.017980 systemd[1]: Detected virtualization kvm.
Dec 16 13:07:06.017994 systemd[1]: Detected architecture x86-64.
Dec 16 13:07:06.018009 systemd[1]: Running in initrd.
Dec 16 13:07:06.018023 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:07:06.018037 systemd[1]: Hostname set to <localhost>.
Dec 16 13:07:06.018052 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:07:06.018076 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:07:06.018091 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:07:06.018105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:07:06.018122 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:07:06.018155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:07:06.018172 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:07:06.018191 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:07:06.018203 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:07:06.018219 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:07:06.018228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:07:06.018264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:07:06.018276 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:07:06.018299 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:07:06.018312 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:07:06.018325 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:07:06.018339 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:07:06.018355 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:07:06.018372 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:07:06.018388 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:07:06.018404 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:07:06.018421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:07:06.018440 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:07:06.018450 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:07:06.018460 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:07:06.018470 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:07:06.018479 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:07:06.018489 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:07:06.018498 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:07:06.018508 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:07:06.018524 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:07:06.018533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:07:06.018542 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:07:06.018552 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:07:06.018562 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:07:06.018578 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:07:06.018655 systemd-journald[193]: Collecting audit messages is disabled. Dec 16 13:07:06.018683 systemd-journald[193]: Journal started Dec 16 13:07:06.018714 systemd-journald[193]: Runtime Journal (/run/log/journal/a4a204eade1c42788084e40f8be92135) is 4.9M, max 39.2M, 34.3M free. Dec 16 13:07:05.970328 systemd-modules-load[194]: Inserted module 'overlay' Dec 16 13:07:06.069572 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:07:06.069636 kernel: Bridge firewalling registered Dec 16 13:07:06.069657 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:07:06.039817 systemd-modules-load[194]: Inserted module 'br_netfilter' Dec 16 13:07:06.073585 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:07:06.075919 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:07:06.083195 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:07:06.086470 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:07:06.097810 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:07:06.100770 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Dec 16 13:07:06.108770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:07:06.116328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:07:06.128717 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:07:06.135137 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:07:06.137972 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:07:06.143465 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:07:06.149462 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:07:06.165994 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:07:06.192170 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:07:06.214540 systemd-resolved[231]: Positive Trust Anchors: Dec 16 13:07:06.214560 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:07:06.214596 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:07:06.217796 systemd-resolved[231]: Defaulting to hostname 'linux'. Dec 16 13:07:06.221711 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:07:06.223012 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:07:06.347370 kernel: SCSI subsystem initialized Dec 16 13:07:06.360298 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:07:06.375303 kernel: iscsi: registered transport (tcp) Dec 16 13:07:06.408133 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:07:06.408286 kernel: QLogic iSCSI HBA Driver Dec 16 13:07:06.439508 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:07:06.466827 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:07:06.470329 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:07:06.544502 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:07:06.548454 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Dec 16 13:07:06.611311 kernel: raid6: avx2x4 gen() 24739 MB/s Dec 16 13:07:06.629303 kernel: raid6: avx2x2 gen() 23371 MB/s Dec 16 13:07:06.648357 kernel: raid6: avx2x1 gen() 15735 MB/s Dec 16 13:07:06.648462 kernel: raid6: using algorithm avx2x4 gen() 24739 MB/s Dec 16 13:07:06.668292 kernel: raid6: .... xor() 8024 MB/s, rmw enabled Dec 16 13:07:06.668413 kernel: raid6: using avx2x2 recovery algorithm Dec 16 13:07:06.698306 kernel: xor: automatically using best checksumming function avx Dec 16 13:07:06.887302 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:07:06.899712 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:07:06.904658 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:07:06.947500 systemd-udevd[441]: Using default interface naming scheme 'v255'. Dec 16 13:07:06.955119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:07:06.960221 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:07:06.994434 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation Dec 16 13:07:07.035401 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:07:07.039570 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:07:07.123119 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:07:07.130966 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 13:07:07.230277 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 16 13:07:07.254106 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 16 13:07:07.254469 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Dec 16 13:07:07.263269 kernel: libata version 3.00 loaded. Dec 16 13:07:07.269347 kernel: scsi host0: Virtio SCSI HBA Dec 16 13:07:07.277347 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 16 13:07:07.288651 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 13:07:07.288777 kernel: GPT:9289727 != 125829119 Dec 16 13:07:07.288796 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 13:07:07.288815 kernel: GPT:9289727 != 125829119 Dec 16 13:07:07.288863 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:07:07.288880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:07:07.288897 kernel: scsi host1: ata_piix Dec 16 13:07:07.296370 kernel: scsi host2: ata_piix Dec 16 13:07:07.311990 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Dec 16 13:07:07.312018 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Dec 16 13:07:07.312030 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 16 13:07:07.313082 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Dec 16 13:07:07.322298 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 13:07:07.336340 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:07:07.355332 kernel: ACPI: bus type USB registered Dec 16 13:07:07.361061 kernel: usbcore: registered new interface driver usbfs Dec 16 13:07:07.361149 kernel: usbcore: registered new interface driver hub Dec 16 13:07:07.361162 kernel: usbcore: registered new device driver usb Dec 16 13:07:07.358587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 16 13:07:07.358794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:07:07.365512 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:07:07.369217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:07:07.370906 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:07:07.529778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:07:07.538482 kernel: AES CTR mode by8 optimization enabled Dec 16 13:07:07.614738 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 13:07:07.633354 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 16 13:07:07.633783 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 16 13:07:07.633749 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 13:07:07.641731 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 16 13:07:07.642052 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 16 13:07:07.642268 kernel: hub 1-0:1.0: USB hub found Dec 16 13:07:07.644279 kernel: hub 1-0:1.0: 2 ports detected Dec 16 13:07:07.652536 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 13:07:07.662201 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 13:07:07.664031 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 16 13:07:07.666142 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:07:07.669617 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:07:07.670519 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:07:07.672794 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:07:07.675952 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:07:07.678462 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:07:07.701426 disk-uuid[597]: Primary Header is updated. Dec 16 13:07:07.701426 disk-uuid[597]: Secondary Entries is updated. Dec 16 13:07:07.701426 disk-uuid[597]: Secondary Header is updated. Dec 16 13:07:07.709487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:07:07.713705 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:07:07.717156 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:07:08.722290 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:07:08.724985 disk-uuid[600]: The operation has completed successfully. Dec 16 13:07:08.785111 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:07:08.785303 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:07:08.839092 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:07:08.874549 sh[616]: Success Dec 16 13:07:08.899993 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 13:07:08.900384 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:07:08.900414 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:07:08.915322 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Dec 16 13:07:08.965486 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:07:08.967566 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:07:08.981045 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 13:07:08.995303 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (628) Dec 16 13:07:08.995378 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:07:08.998700 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:07:09.011832 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:07:09.011914 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:07:09.014788 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:07:09.016540 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:07:09.018109 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:07:09.020297 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:07:09.024456 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:07:09.056283 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (659) Dec 16 13:07:09.060500 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:07:09.063275 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:07:09.069330 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:07:09.069425 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:07:09.077342 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:07:09.078360 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:07:09.081371 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:07:09.188506 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:07:09.193511 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:07:09.238636 systemd-networkd[797]: lo: Link UP Dec 16 13:07:09.238649 systemd-networkd[797]: lo: Gained carrier Dec 16 13:07:09.245795 systemd-networkd[797]: Enumeration completed Dec 16 13:07:09.246216 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 16 13:07:09.246221 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 16 13:07:09.246730 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:07:09.248816 systemd[1]: Reached target network.target - Network. Dec 16 13:07:09.249782 systemd-networkd[797]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 16 13:07:09.249786 systemd-networkd[797]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:07:09.251220 systemd-networkd[797]: eth0: Link UP Dec 16 13:07:09.251452 systemd-networkd[797]: eth1: Link UP Dec 16 13:07:09.251626 systemd-networkd[797]: eth0: Gained carrier Dec 16 13:07:09.251640 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 16 13:07:09.261208 systemd-networkd[797]: eth1: Gained carrier Dec 16 13:07:09.261284 systemd-networkd[797]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:07:09.279904 systemd-networkd[797]: eth0: DHCPv4 address 143.198.151.179/20, gateway 143.198.144.1 acquired from 169.254.169.253 Dec 16 13:07:09.287385 systemd-networkd[797]: eth1: DHCPv4 address 10.124.0.17/20 acquired from 169.254.169.253 Dec 16 13:07:09.361427 ignition[709]: Ignition 2.22.0 Dec 16 13:07:09.361444 ignition[709]: Stage: fetch-offline Dec 16 13:07:09.361494 ignition[709]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:07:09.361505 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 13:07:09.365950 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:07:09.361623 ignition[709]: parsed url from cmdline: "" Dec 16 13:07:09.361627 ignition[709]: no config URL provided Dec 16 13:07:09.361633 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:07:09.369454 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 13:07:09.361640 ignition[709]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:07:09.361646 ignition[709]: failed to fetch config: resource requires networking Dec 16 13:07:09.362549 ignition[709]: Ignition finished successfully Dec 16 13:07:09.412519 ignition[808]: Ignition 2.22.0 Dec 16 13:07:09.412607 ignition[808]: Stage: fetch Dec 16 13:07:09.412942 ignition[808]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:07:09.412963 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 13:07:09.413144 ignition[808]: parsed url from cmdline: "" Dec 16 13:07:09.413150 ignition[808]: no config URL provided Dec 16 13:07:09.413158 ignition[808]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:07:09.413172 ignition[808]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:07:09.413216 ignition[808]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 16 13:07:09.459777 ignition[808]: GET result: OK Dec 16 13:07:09.460209 ignition[808]: parsing config with SHA512: a7d1674915bb4708e9783308fc8c20101ef64f4532a964cd99aa143bd7fc0d6de46846fea46bbfc86df1105489e3af00f357f0b96d9e4713aa386af937b85d08 Dec 16 13:07:09.471989 unknown[808]: fetched base config from "system" Dec 16 13:07:09.473139 ignition[808]: fetch: fetch complete Dec 16 13:07:09.472040 unknown[808]: fetched base config from "system" Dec 16 13:07:09.473173 ignition[808]: fetch: fetch passed Dec 16 13:07:09.472053 unknown[808]: fetched user config from "digitalocean" Dec 16 13:07:09.473300 ignition[808]: Ignition finished successfully Dec 16 13:07:09.477792 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 13:07:09.481687 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 16 13:07:09.539832 ignition[815]: Ignition 2.22.0 Dec 16 13:07:09.539857 ignition[815]: Stage: kargs Dec 16 13:07:09.540128 ignition[815]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:07:09.540150 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 13:07:09.541560 ignition[815]: kargs: kargs passed Dec 16 13:07:09.544687 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:07:09.541653 ignition[815]: Ignition finished successfully Dec 16 13:07:09.547903 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:07:09.589180 ignition[821]: Ignition 2.22.0 Dec 16 13:07:09.589206 ignition[821]: Stage: disks Dec 16 13:07:09.589473 ignition[821]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:07:09.589489 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 13:07:09.591162 ignition[821]: disks: disks passed Dec 16 13:07:09.591228 ignition[821]: Ignition finished successfully Dec 16 13:07:09.594031 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:07:09.595745 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:07:09.597110 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:07:09.598954 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:07:09.600682 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:07:09.601379 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:07:09.605442 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:07:09.645778 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 13:07:09.649652 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:07:09.657687 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:07:09.814271 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:07:09.815594 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:07:09.817375 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:07:09.821717 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:07:09.825312 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:07:09.840872 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Dec 16 13:07:09.845534 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 13:07:09.848948 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:07:09.850618 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:07:09.859292 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837) Dec 16 13:07:09.865877 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:07:09.865952 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:07:09.869839 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Dec 16 13:07:09.880682 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 13:07:09.880723 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 13:07:09.882936 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:07:09.891469 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:07:09.966021 coreos-metadata[839]: Dec 16 13:07:09.965 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 16 13:07:09.978491 coreos-metadata[839]: Dec 16 13:07:09.978 INFO Fetch successful
Dec 16 13:07:09.989656 coreos-metadata[840]: Dec 16 13:07:09.989 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 16 13:07:09.990726 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Dec 16 13:07:09.990878 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Dec 16 13:07:09.994739 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:07:10.002007 initrd-setup-root[875]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:07:10.005265 coreos-metadata[840]: Dec 16 13:07:10.004 INFO Fetch successful
Dec 16 13:07:10.010727 coreos-metadata[840]: Dec 16 13:07:10.010 INFO wrote hostname ci-4459.2.2-e-d5fd5cf192 to /sysroot/etc/hostname
Dec 16 13:07:10.013310 initrd-setup-root[882]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:07:10.013939 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:07:10.021760 initrd-setup-root[890]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:07:10.155179 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:07:10.157928 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:07:10.160463 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:07:10.178364 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:07:10.181399 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:07:10.203641 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:07:10.234337 ignition[958]: INFO : Ignition 2.22.0
Dec 16 13:07:10.235360 ignition[958]: INFO : Stage: mount
Dec 16 13:07:10.235972 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:07:10.235972 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 16 13:07:10.237914 ignition[958]: INFO : mount: mount passed
Dec 16 13:07:10.237914 ignition[958]: INFO : Ignition finished successfully
Dec 16 13:07:10.239941 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:07:10.242403 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:07:10.273320 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:07:10.306301 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (969)
Dec 16 13:07:10.309951 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:07:10.310035 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:07:10.318197 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 13:07:10.318296 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 13:07:10.320854 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:07:10.374604 ignition[986]: INFO : Ignition 2.22.0
Dec 16 13:07:10.374604 ignition[986]: INFO : Stage: files
Dec 16 13:07:10.376368 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:07:10.376368 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 16 13:07:10.376368 ignition[986]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:07:10.379154 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:07:10.379154 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:07:10.381236 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:07:10.381236 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:07:10.383235 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:07:10.383235 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:07:10.383235 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:07:10.381667 unknown[986]: wrote ssh authorized keys file for user: core
Dec 16 13:07:10.529593 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:07:10.569380 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:07:10.569380 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:07:10.573702 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:07:10.573702 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:07:10.573702 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:07:10.573702 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:07:10.573702 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:07:10.573702 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:07:10.573702 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:07:10.573702 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:07:10.594692 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:07:10.594692 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:07:10.594692 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:07:10.594692 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:07:10.594692 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Dec 16 13:07:10.855364 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 13:07:10.922440 systemd-networkd[797]: eth0: Gained IPv6LL
Dec 16 13:07:11.178437 systemd-networkd[797]: eth1: Gained IPv6LL
Dec 16 13:07:11.212888 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:07:11.212888 ignition[986]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 13:07:11.216497 ignition[986]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:07:11.220217 ignition[986]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:07:11.220217 ignition[986]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 13:07:11.220217 ignition[986]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:07:11.220217 ignition[986]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:07:11.225163 ignition[986]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:07:11.225163 ignition[986]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:07:11.225163 ignition[986]: INFO : files: files passed
Dec 16 13:07:11.225163 ignition[986]: INFO : Ignition finished successfully
Dec 16 13:07:11.225884 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:07:11.229462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:07:11.234507 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:07:11.250853 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:07:11.251637 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:07:11.263375 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:07:11.263375 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:07:11.267214 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:07:11.268372 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:07:11.270183 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:07:11.273030 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:07:11.333639 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:07:11.333781 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:07:11.335495 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:07:11.337740 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:07:11.339489 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:07:11.341437 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:07:11.369131 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:07:11.372364 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:07:11.396583 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:07:11.397596 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:07:11.399682 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:07:11.401551 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:07:11.401857 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:07:11.403698 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:07:11.404901 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:07:11.406572 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:07:11.408097 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:07:11.409731 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:07:11.411518 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:07:11.413349 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:07:11.415169 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:07:11.417179 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:07:11.419042 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:07:11.420849 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:07:11.422473 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:07:11.422697 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:07:11.424705 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:07:11.425732 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:07:11.427598 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:07:11.428048 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:07:11.429570 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:07:11.429750 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:07:11.432162 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:07:11.432487 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:07:11.434582 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:07:11.434740 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:07:11.436827 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 13:07:11.437036 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:07:11.440511 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:07:11.441446 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:07:11.442424 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:07:11.446486 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:07:11.449130 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:07:11.449443 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:07:11.453676 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:07:11.453810 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:07:11.465642 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:07:11.465746 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:07:11.486893 ignition[1040]: INFO : Ignition 2.22.0
Dec 16 13:07:11.486893 ignition[1040]: INFO : Stage: umount
Dec 16 13:07:11.491367 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:07:11.491367 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 16 13:07:11.491367 ignition[1040]: INFO : umount: umount passed
Dec 16 13:07:11.491367 ignition[1040]: INFO : Ignition finished successfully
Dec 16 13:07:11.498409 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:07:11.498614 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:07:11.503285 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:07:11.503953 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:07:11.504037 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:07:11.560215 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:07:11.561226 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:07:11.561965 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:07:11.562012 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:07:11.563516 systemd[1]: Stopped target network.target - Network.
Dec 16 13:07:11.565151 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:07:11.565302 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:07:11.566961 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:07:11.585948 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:07:11.591413 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:07:11.593383 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:07:11.594111 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:07:11.595879 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:07:11.595950 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:07:11.597690 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:07:11.597731 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:07:11.599035 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:07:11.599111 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:07:11.600522 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:07:11.600571 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:07:11.602340 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:07:11.603848 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:07:11.606614 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:07:11.606754 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:07:11.608007 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:07:11.608166 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:07:11.610070 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:07:11.610200 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:07:11.617068 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:07:11.617922 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:07:11.618114 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:07:11.621265 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:07:11.622724 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:07:11.624729 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:07:11.624799 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:07:11.628409 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:07:11.630364 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:07:11.630465 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:07:11.633973 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:07:11.634067 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:07:11.638025 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:07:11.638108 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:07:11.639049 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:07:11.639115 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:07:11.641439 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:07:11.648898 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:07:11.649055 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:07:11.659806 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:07:11.664456 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:07:11.667215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:07:11.667422 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:07:11.669377 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:07:11.669438 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:07:11.671071 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:07:11.671137 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:07:11.673635 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:07:11.673714 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:07:11.675345 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:07:11.675441 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:07:11.678555 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:07:11.680808 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:07:11.680921 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:07:11.683942 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:07:11.684029 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:07:11.685166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:07:11.685265 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:07:11.689555 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:07:11.689652 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:07:11.689712 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:07:11.690180 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:07:11.690400 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:07:11.707623 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:07:11.707806 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:07:11.710112 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:07:11.721466 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:07:11.743638 systemd[1]: Switching root.
Dec 16 13:07:11.825644 systemd-journald[193]: Journal stopped
Dec 16 13:07:13.573136 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:07:13.573231 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:07:13.573329 kernel: SELinux: policy capability open_perms=1
Dec 16 13:07:13.573386 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:07:13.573399 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:07:13.573425 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:07:13.573462 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:07:13.573480 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:07:13.573499 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:07:13.573518 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:07:13.573531 kernel: audit: type=1403 audit(1765890432.011:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:07:13.573547 systemd[1]: Successfully loaded SELinux policy in 92.796ms.
Dec 16 13:07:13.573577 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.742ms.
Dec 16 13:07:13.573592 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:07:13.573614 systemd[1]: Detected virtualization kvm.
Dec 16 13:07:13.573626 systemd[1]: Detected architecture x86-64.
Dec 16 13:07:13.573638 systemd[1]: Detected first boot.
Dec 16 13:07:13.573650 systemd[1]: Hostname set to <ci-4459.2.2-e-d5fd5cf192>.
Dec 16 13:07:13.573663 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:07:13.573675 zram_generator::config[1085]: No configuration found.
Dec 16 13:07:13.573689 kernel: Guest personality initialized and is inactive
Dec 16 13:07:13.573700 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:07:13.573712 kernel: Initialized host personality
Dec 16 13:07:13.573731 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:07:13.573752 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:07:13.573774 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:07:13.573786 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:07:13.573798 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:07:13.573810 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:07:13.573826 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:07:13.573838 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:07:13.573858 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:07:13.573870 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:07:13.573887 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:07:13.573907 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:07:13.573919 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:07:13.573930 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:07:13.573942 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:07:13.573954 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:07:13.573967 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:07:13.573986 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:07:13.573999 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:07:13.574011 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:07:13.574023 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:07:13.574035 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:07:13.574067 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:07:13.574086 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:07:13.574098 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:07:13.574110 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:07:13.574122 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:07:13.574134 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:07:13.574146 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:07:13.574158 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:07:13.574173 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:07:13.574192 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:07:13.574211 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:07:13.574465 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:07:13.574503 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:07:13.574521 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:07:13.574541 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:07:13.574554 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:07:13.574567 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:07:13.574585 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:07:13.574605 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:07:13.574617 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:07:13.574642 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:07:13.574655 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:07:13.574670 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:07:13.574694 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:07:13.574706 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:07:13.574718 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:07:13.574730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:07:13.574743 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:07:13.574772 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:07:13.574791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:07:13.574810 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:07:13.574826 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:07:13.574840 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:07:13.574858 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:07:13.574887 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:07:13.574901 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:07:13.574924 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:07:13.574943 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:07:13.574962 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:07:13.574983 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:07:13.575003 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:07:13.575038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:07:13.575063 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:07:13.575086 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:07:13.575099 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:07:13.575111 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:07:13.575124 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:07:13.575144 systemd[1]: Stopped verity-setup.service.
Dec 16 13:07:13.575157 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:07:13.575170 kernel: ACPI: bus type drm_connector registered
Dec 16 13:07:13.575203 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:07:13.575215 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:07:13.575227 kernel: loop: module loaded
Dec 16 13:07:13.575258 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:07:13.575327 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:07:13.575358 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:07:13.575376 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:07:13.575394 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:07:13.575411 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:07:13.575429 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:07:13.575448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:07:13.575465 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:07:13.577422 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:07:13.577461 kernel: fuse: init (API version 7.41)
Dec 16 13:07:13.577506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:07:13.577527 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:07:13.577546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:07:13.577567 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:07:13.577580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:07:13.577599 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:07:13.577620 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:07:13.577639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:07:13.577654 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:07:13.577676 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:07:13.577699 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:07:13.577712 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:07:13.577801 systemd-journald[1162]: Collecting audit messages is disabled.
Dec 16 13:07:13.577848 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:07:13.577865 systemd-journald[1162]: Journal started
Dec 16 13:07:13.577897 systemd-journald[1162]: Runtime Journal (/run/log/journal/a4a204eade1c42788084e40f8be92135) is 4.9M, max 39.2M, 34.3M free.
Dec 16 13:07:12.961927 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:07:12.985910 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 13:07:12.986626 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:07:13.582323 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:07:13.585267 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:07:13.590333 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:07:13.596305 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:07:13.600317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:07:13.613335 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:07:13.618331 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:07:13.624383 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:07:13.631359 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:07:13.638145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:07:13.653661 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:07:13.663289 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:07:13.676339 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:07:13.677941 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:07:13.679645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:07:13.690075 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:07:13.692045 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:07:13.693532 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:07:13.721883 kernel: loop0: detected capacity change from 0 to 219144
Dec 16 13:07:13.732355 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:07:13.753835 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:07:13.762278 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:07:13.772288 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:07:13.776827 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:07:13.781779 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:07:13.790293 kernel: loop1: detected capacity change from 0 to 128560
Dec 16 13:07:13.825016 systemd-journald[1162]: Time spent on flushing to /var/log/journal/a4a204eade1c42788084e40f8be92135 is 92.856ms for 1019 entries.
Dec 16 13:07:13.825016 systemd-journald[1162]: System Journal (/var/log/journal/a4a204eade1c42788084e40f8be92135) is 8M, max 195.6M, 187.6M free.
Dec 16 13:07:13.939500 systemd-journald[1162]: Received client request to flush runtime journal.
Dec 16 13:07:13.939574 kernel: loop2: detected capacity change from 0 to 8
Dec 16 13:07:13.939599 kernel: loop3: detected capacity change from 0 to 110984
Dec 16 13:07:13.864263 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:07:13.943980 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:07:13.959307 kernel: loop4: detected capacity change from 0 to 219144
Dec 16 13:07:13.968134 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:07:13.974554 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:07:13.989925 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:07:14.025088 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Dec 16 13:07:14.025122 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Dec 16 13:07:14.041369 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:07:14.046296 kernel: loop5: detected capacity change from 0 to 128560
Dec 16 13:07:14.078424 kernel: loop6: detected capacity change from 0 to 8
Dec 16 13:07:14.092748 kernel: loop7: detected capacity change from 0 to 110984
Dec 16 13:07:14.110998 (sd-merge)[1230]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Dec 16 13:07:14.111849 (sd-merge)[1230]: Merged extensions into '/usr'.
Dec 16 13:07:14.125468 systemd[1]: Reload requested from client PID 1191 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:07:14.125498 systemd[1]: Reloading...
Dec 16 13:07:14.366295 zram_generator::config[1265]: No configuration found.
Dec 16 13:07:14.629522 ldconfig[1187]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:07:14.795467 systemd[1]: Reloading finished in 669 ms.
Dec 16 13:07:14.813150 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:07:14.815954 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:07:14.830504 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:07:14.834530 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:07:14.879780 systemd[1]: Reload requested from client PID 1303 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:07:14.879811 systemd[1]: Reloading...
Dec 16 13:07:14.890666 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:07:14.891450 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:07:14.894418 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:07:14.896472 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:07:14.899046 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:07:14.901853 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Dec 16 13:07:14.904461 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Dec 16 13:07:14.914289 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:07:14.914306 systemd-tmpfiles[1304]: Skipping /boot
Dec 16 13:07:14.930868 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:07:14.930884 systemd-tmpfiles[1304]: Skipping /boot
Dec 16 13:07:14.986297 zram_generator::config[1330]: No configuration found.
Dec 16 13:07:15.225398 systemd[1]: Reloading finished in 344 ms.
Dec 16 13:07:15.251264 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:07:15.262967 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:07:15.273990 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:07:15.278580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:07:15.282879 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:07:15.299611 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:07:15.305743 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:07:15.319279 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:07:15.325113 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:07:15.325377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:07:15.329590 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:07:15.335730 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:07:15.347647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:07:15.349368 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:07:15.349573 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:07:15.349714 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:07:15.356896 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:07:15.371817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:07:15.372344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:07:15.377613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:07:15.378266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:07:15.381078 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:07:15.381794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:07:15.395992 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:07:15.398742 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:07:15.401110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:07:15.405322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:07:15.408999 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:07:15.414361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:07:15.417742 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:07:15.418954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:07:15.419435 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:07:15.422501 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:07:15.424432 systemd-udevd[1381]: Using default interface naming scheme 'v255'.
Dec 16 13:07:15.428626 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:07:15.430379 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:07:15.433390 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:07:15.439836 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:07:15.443386 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:07:15.457049 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 13:07:15.467720 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:07:15.483994 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:07:15.498631 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:07:15.531111 augenrules[1435]: No rules
Dec 16 13:07:15.530747 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:07:15.532460 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:07:15.535951 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:07:15.537911 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:07:15.563115 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:07:15.563473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:07:15.567044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:07:15.569371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:07:15.574947 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:07:15.575334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:07:15.581586 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:07:15.581889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:07:15.628791 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:07:15.767279 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Dec 16 13:07:15.770496 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Dec 16 13:07:15.772315 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:07:15.772451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:07:15.774064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:07:15.778620 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:07:15.785157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:07:15.786134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:07:15.786173 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:07:15.786206 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:07:15.786225 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:07:15.812766 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:07:15.815498 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:07:15.821865 kernel: ISO 9660 Extensions: RRIP_1991A
Dec 16 13:07:15.824829 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:07:15.832457 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:07:15.832786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:07:15.834702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:07:15.838817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:07:15.839930 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:07:15.842837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:07:15.865344 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Dec 16 13:07:15.881589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 13:07:15.902372 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:07:15.934306 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:07:15.942556 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:07:16.004288 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 16 13:07:16.061582 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 16 13:07:16.062064 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 13:07:16.099273 kernel: ACPI: button: Power Button [PWRF]
Dec 16 13:07:16.127978 systemd-networkd[1426]: lo: Link UP
Dec 16 13:07:16.127994 systemd-networkd[1426]: lo: Gained carrier
Dec 16 13:07:16.131155 systemd-networkd[1426]: Enumeration completed
Dec 16 13:07:16.131383 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:07:16.131643 systemd-networkd[1426]: eth0: Configuring with /run/systemd/network/10-7a:94:43:7f:1b:d2.network.
Dec 16 13:07:16.133668 systemd-networkd[1426]: eth1: Configuring with /run/systemd/network/10-9a:c1:19:ee:06:21.network.
Dec 16 13:07:16.134311 systemd-networkd[1426]: eth0: Link UP
Dec 16 13:07:16.134451 systemd-networkd[1426]: eth0: Gained carrier
Dec 16 13:07:16.137152 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:07:16.138725 systemd-networkd[1426]: eth1: Link UP
Dec 16 13:07:16.139469 systemd-networkd[1426]: eth1: Gained carrier
Dec 16 13:07:16.143647 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:07:16.201393 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 13:07:16.202402 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:07:16.224800 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:07:16.233662 systemd-resolved[1379]: Positive Trust Anchors:
Dec 16 13:07:16.233680 systemd-resolved[1379]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:07:16.233716 systemd-resolved[1379]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:07:16.242786 systemd-resolved[1379]: Using system hostname 'ci-4459.2.2-e-d5fd5cf192'.
Dec 16 13:07:16.244829 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:07:16.245913 systemd[1]: Reached target network.target - Network.
Dec 16 13:07:16.247411 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:07:16.248342 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:07:16.250461 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:07:16.251268 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:07:16.251969 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:07:16.253578 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:07:16.256625 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:07:16.257602 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:07:16.258432 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:07:16.258471 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:07:16.259089 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:07:16.261781 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:07:16.266807 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:07:16.273106 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:07:16.275483 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:07:16.277610 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:07:16.288258 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:07:16.290623 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:07:16.293214 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:07:16.297529 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:07:16.298331 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:07:16.300421 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:07:16.300454 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:07:16.301797 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:07:16.308583 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:07:16.315553 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:07:16.321568 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:07:16.326407 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:07:16.332840 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:07:16.334753 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:07:16.346034 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 16 13:07:16.341901 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:07:16.349609 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:07:16.352541 jq[1511]: false Dec 16 13:07:16.358302 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 16 13:07:16.367999 kernel: Console: switching to colour dummy device 80x25 Dec 16 13:07:16.371786 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 16 13:07:16.371879 kernel: [drm] features: -context_init Dec 16 13:07:16.376463 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:07:16.376652 oslogin_cache_refresh[1513]: Refreshing passwd entry cache Dec 16 13:07:16.379675 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing passwd entry cache Dec 16 13:07:16.382396 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting users, quitting Dec 16 13:07:16.382396 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:07:16.382396 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing group entry cache Dec 16 13:07:16.382396 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting groups, quitting Dec 16 13:07:16.382396 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:07:16.379912 oslogin_cache_refresh[1513]: Failure getting users, quitting Dec 16 13:07:16.379941 oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:07:16.380014 oslogin_cache_refresh[1513]: Refreshing group entry cache Dec 16 13:07:16.381177 oslogin_cache_refresh[1513]: Failure getting groups, quitting Dec 16 13:07:16.381195 oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:07:16.386113 kernel: [drm] number of scanouts: 1 Dec 16 13:07:16.386192 kernel: [drm] number of cap sets: 0 Dec 16 13:07:16.386225 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Dec 16 13:07:16.393937 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:07:16.410769 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:07:16.417953 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:07:16.419091 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:07:16.420764 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:07:16.425645 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:07:16.433837 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:07:16.437706 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:07:16.438193 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:07:16.438476 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:07:16.438854 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:07:16.444774 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Dec 16 13:07:16.492315 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 16 13:07:16.492422 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 13:07:16.492733 update_engine[1526]: I20251216 13:07:16.492606 1526 main.cc:92] Flatcar Update Engine starting Dec 16 13:07:16.521650 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 16 13:07:16.521414 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:07:16.521932 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:07:16.532917 extend-filesystems[1512]: Found /dev/vda6 Dec 16 13:07:16.543863 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:07:17.211708 systemd-resolved[1379]: Clock change detected. Flushing caches. Dec 16 13:07:17.211724 systemd-timesyncd[1415]: Contacted time server 23.186.168.132:123 (0.flatcar.pool.ntp.org). Dec 16 13:07:17.211807 systemd-timesyncd[1415]: Initial clock synchronization to Tue 2025-12-16 13:07:17.211554 UTC. Dec 16 13:07:17.214285 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:07:17.215837 (ntainerd)[1540]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:07:17.242179 tar[1531]: linux-amd64/LICENSE Dec 16 13:07:17.242179 tar[1531]: linux-amd64/helm Dec 16 13:07:17.244370 extend-filesystems[1512]: Found /dev/vda9 Dec 16 13:07:17.253419 jq[1528]: true Dec 16 13:07:17.266212 extend-filesystems[1512]: Checking size of /dev/vda9 Dec 16 13:07:17.272679 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:07:17.272335 dbus-daemon[1509]: [system] SELinux support is enabled Dec 16 13:07:17.281602 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:07:17.281660 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:07:17.281882 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:07:17.282034 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 16 13:07:17.282062 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:07:17.294205 coreos-metadata[1508]: Dec 16 13:07:17.292 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 16 13:07:17.310844 coreos-metadata[1508]: Dec 16 13:07:17.310 INFO Fetch successful Dec 16 13:07:17.331263 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:07:17.331953 update_engine[1526]: I20251216 13:07:17.331489 1526 update_check_scheduler.cc:74] Next update check in 7m9s Dec 16 13:07:17.336172 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 16 13:07:17.354186 jq[1550]: true Dec 16 13:07:17.416505 extend-filesystems[1512]: Resized partition /dev/vda9 Dec 16 13:07:17.430388 extend-filesystems[1563]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:07:17.441134 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Dec 16 13:07:17.441030 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:07:17.441503 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:07:17.549261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:07:17.625365 bash[1578]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:07:17.628271 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:07:17.631755 systemd[1]: Starting sshkeys.service... Dec 16 13:07:17.670881 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 16 13:07:17.697889 extend-filesystems[1563]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 13:07:17.697889 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 16 13:07:17.697889 extend-filesystems[1563]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 16 13:07:17.706266 extend-filesystems[1512]: Resized filesystem in /dev/vda9 Dec 16 13:07:17.700979 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:07:17.701268 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:07:17.714192 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 13:07:17.714292 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:07:17.720631 systemd-logind[1522]: New seat seat0. Dec 16 13:07:17.729917 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 13:07:17.756899 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:07:17.772339 systemd-logind[1522]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:07:17.788640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:07:17.788989 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:07:17.801050 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:07:17.810548 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:07:17.939684 coreos-metadata[1591]: Dec 16 13:07:17.938 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 16 13:07:17.941697 locksmithd[1554]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:07:17.954043 coreos-metadata[1591]: Dec 16 13:07:17.952 INFO Fetch successful Dec 16 13:07:17.993303 unknown[1591]: wrote ssh authorized keys file for user: core Dec 16 13:07:18.003603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:07:18.039513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:07:18.040006 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:07:18.044760 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:07:18.054648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
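The extend-filesystems sequence above is a standard first-boot root grow: the partition already spans the disk, so resize2fs performs an on-line grow of the mounted ext4 root from 553472 to 15121403 4k blocks. The manual equivalent would be roughly the following (device names as in this log; growpart is shown only for the general case where the partition itself must grow first, which this boot did not need):

    # grow the mounted ext4 filesystem on /dev/vda9 to fill its partition
    # (ext4 supports on-line grow; shrinking requires an unmounted filesystem)
    resize2fs /dev/vda9
    # if the partition also needed enlarging, cloud-utils' growpart would
    # come first:  growpart /dev/vda 9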
Dec 16 13:07:18.061967 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:07:18.079775 containerd[1540]: time="2025-12-16T13:07:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:07:18.083394 containerd[1540]: time="2025-12-16T13:07:18.082087156Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:07:18.106072 update-ssh-keys[1606]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:07:18.108499 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 13:07:18.119654 systemd[1]: Finished sshkeys.service. Dec 16 13:07:18.143404 containerd[1540]: time="2025-12-16T13:07:18.141780864Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.349µs" Dec 16 13:07:18.143404 containerd[1540]: time="2025-12-16T13:07:18.141833396Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:07:18.143404 containerd[1540]: time="2025-12-16T13:07:18.141854383Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:07:18.143404 containerd[1540]: time="2025-12-16T13:07:18.142113543Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:07:18.143404 containerd[1540]: time="2025-12-16T13:07:18.142136556Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:07:18.143404 containerd[1540]: time="2025-12-16T13:07:18.142167376Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:07:18.143404 containerd[1540]: time="2025-12-16T13:07:18.142226711Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:07:18.143404 containerd[1540]: time="2025-12-16T13:07:18.142238783Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:07:18.144077 containerd[1540]: time="2025-12-16T13:07:18.144031635Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:07:18.144159 containerd[1540]: time="2025-12-16T13:07:18.144147510Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:07:18.144226 containerd[1540]: time="2025-12-16T13:07:18.144212161Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:07:18.144284 containerd[1540]: time="2025-12-16T13:07:18.144273319Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:07:18.144522 containerd[1540]: time="2025-12-16T13:07:18.144495178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:07:18.147576 containerd[1540]: 
time="2025-12-16T13:07:18.146849737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:07:18.147576 containerd[1540]: time="2025-12-16T13:07:18.146925137Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:07:18.147576 containerd[1540]: time="2025-12-16T13:07:18.146946876Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:07:18.147576 containerd[1540]: time="2025-12-16T13:07:18.147014008Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:07:18.147922 containerd[1540]: time="2025-12-16T13:07:18.147896106Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:07:18.148113 containerd[1540]: time="2025-12-16T13:07:18.148087301Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:07:18.175648 containerd[1540]: time="2025-12-16T13:07:18.175581041Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177516443Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177587072Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177661543Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177676927Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177703391Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177715429Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177726999Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177740687Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177752713Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177777662Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177790907Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.177985964Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.178033653Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:07:18.178377 containerd[1540]: time="2025-12-16T13:07:18.178056461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:07:18.178838 containerd[1540]: time="2025-12-16T13:07:18.178068243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:07:18.178838 containerd[1540]: time="2025-12-16T13:07:18.178190026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:07:18.178838 containerd[1540]: time="2025-12-16T13:07:18.178208068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:07:18.178838 containerd[1540]: time="2025-12-16T13:07:18.178230323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:07:18.178838 containerd[1540]: time="2025-12-16T13:07:18.178277129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:07:18.178838 containerd[1540]: time="2025-12-16T13:07:18.178296297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:07:18.178838 containerd[1540]: time="2025-12-16T13:07:18.178307694Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:07:18.179453 containerd[1540]: time="2025-12-16T13:07:18.178319826Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:07:18.179453 containerd[1540]: time="2025-12-16T13:07:18.179152602Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:07:18.179453 containerd[1540]: time="2025-12-16T13:07:18.179205687Z" level=info msg="Start snapshots syncer" Dec 16 13:07:18.179453 containerd[1540]: time="2025-12-16T13:07:18.179294879Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:07:18.180066 containerd[1540]: time="2025-12-16T13:07:18.180026997Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:07:18.180296 containerd[1540]: time="2025-12-16T13:07:18.180277924Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:07:18.180574 containerd[1540]: time="2025-12-16T13:07:18.180551220Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:07:18.180867 containerd[1540]: time="2025-12-16T13:07:18.180846415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:07:18.181060 containerd[1540]: time="2025-12-16T13:07:18.181038273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:07:18.181172 containerd[1540]: time="2025-12-16T13:07:18.181157789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:07:18.181387 containerd[1540]: time="2025-12-16T13:07:18.181337776Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:07:18.181467 containerd[1540]: time="2025-12-16T13:07:18.181456198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:07:18.181561 containerd[1540]: time="2025-12-16T13:07:18.181546087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:07:18.182900 containerd[1540]: time="2025-12-16T13:07:18.182446154Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:07:18.182900 containerd[1540]: time="2025-12-16T13:07:18.182532415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:07:18.182900 containerd[1540]: 
time="2025-12-16T13:07:18.182602874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:07:18.182900 containerd[1540]: time="2025-12-16T13:07:18.182621624Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:07:18.182900 containerd[1540]: time="2025-12-16T13:07:18.182698823Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:07:18.182900 containerd[1540]: time="2025-12-16T13:07:18.182720533Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:07:18.182900 containerd[1540]: time="2025-12-16T13:07:18.182755690Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:07:18.182900 containerd[1540]: time="2025-12-16T13:07:18.182772927Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:07:18.182900 containerd[1540]: time="2025-12-16T13:07:18.182784162Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:07:18.184389 containerd[1540]: time="2025-12-16T13:07:18.182797317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:07:18.184389 containerd[1540]: time="2025-12-16T13:07:18.183297170Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:07:18.184389 containerd[1540]: time="2025-12-16T13:07:18.183335339Z" level=info msg="runtime interface created" Dec 16 13:07:18.184389 containerd[1540]: time="2025-12-16T13:07:18.183368103Z" level=info msg="created NRI interface" Dec 16 13:07:18.184389 containerd[1540]: time="2025-12-16T13:07:18.183380812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:07:18.184389 containerd[1540]: time="2025-12-16T13:07:18.183405727Z" level=info msg="Connect containerd service" Dec 16 13:07:18.184389 containerd[1540]: time="2025-12-16T13:07:18.183439009Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:07:18.185847 containerd[1540]: time="2025-12-16T13:07:18.185800759Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:07:18.245639 systemd-networkd[1426]: eth0: Gained IPv6LL Dec 16 13:07:18.254832 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:07:18.258406 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:07:18.268941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:18.276440 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:07:18.298500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:07:18.409548 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
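The containerd error just above ("no network config found in /etc/cni/net.d") is expected at this stage: the CRI plugin comes up without pod networking and retries until a CNI config appears. A minimal, hypothetical conflist of the kind that would satisfy it, placed at /etc/cni/net.d/10-example.conflist (the name, bridge, and subnet are invented for illustration):

    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.85.0.0/16" }]]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }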
Dec 16 13:07:18.502268 systemd-networkd[1426]: eth1: Gained IPv6LL Dec 16 13:07:18.515382 kernel: EDAC MC: Ver: 3.0.0 Dec 16 13:07:18.585133 sshd_keygen[1545]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:07:18.662520 containerd[1540]: time="2025-12-16T13:07:18.661845346Z" level=info msg="Start subscribing containerd event" Dec 16 13:07:18.662520 containerd[1540]: time="2025-12-16T13:07:18.661932394Z" level=info msg="Start recovering state" Dec 16 13:07:18.662520 containerd[1540]: time="2025-12-16T13:07:18.662080186Z" level=info msg="Start event monitor" Dec 16 13:07:18.662520 containerd[1540]: time="2025-12-16T13:07:18.662096033Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:07:18.662520 containerd[1540]: time="2025-12-16T13:07:18.662105486Z" level=info msg="Start streaming server" Dec 16 13:07:18.662520 containerd[1540]: time="2025-12-16T13:07:18.662115673Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:07:18.662520 containerd[1540]: time="2025-12-16T13:07:18.662124253Z" level=info msg="runtime interface starting up..." Dec 16 13:07:18.662520 containerd[1540]: time="2025-12-16T13:07:18.662130598Z" level=info msg="starting plugins..." Dec 16 13:07:18.662520 containerd[1540]: time="2025-12-16T13:07:18.662144174Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:07:18.668478 containerd[1540]: time="2025-12-16T13:07:18.665304776Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:07:18.668478 containerd[1540]: time="2025-12-16T13:07:18.665553992Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:07:18.665902 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:07:18.669846 containerd[1540]: time="2025-12-16T13:07:18.669391306Z" level=info msg="containerd successfully booted in 0.589028s" Dec 16 13:07:18.697149 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:07:18.705210 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:07:18.766324 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:07:18.768242 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:07:18.777454 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:07:18.824981 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:07:18.835321 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:07:18.842159 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:07:18.845256 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:07:19.012007 tar[1531]: linux-amd64/README.md Dec 16 13:07:19.037535 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:07:19.809489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:19.812988 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:07:19.815490 systemd[1]: Startup finished in 4.224s (kernel) + 6.311s (initrd) + 7.227s (userspace) = 17.763s. 
Dec 16 13:07:19.821728 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:07:20.424426 kubelet[1670]: E1216 13:07:20.423750 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:07:20.429942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:07:20.430153 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:07:20.430542 systemd[1]: kubelet.service: Consumed 1.459s CPU time, 257.1M memory peak. Dec 16 13:07:20.433621 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:07:20.435785 systemd[1]: Started sshd@0-143.198.151.179:22-139.178.68.195:51594.service - OpenSSH per-connection server daemon (139.178.68.195:51594). Dec 16 13:07:20.539981 sshd[1682]: Accepted publickey for core from 139.178.68.195 port 51594 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:07:20.543165 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:20.554306 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:07:20.556051 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:07:20.570416 systemd-logind[1522]: New session 1 of user core. Dec 16 13:07:20.592457 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:07:20.597886 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:07:20.616146 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:07:20.620388 systemd-logind[1522]: New session c1 of user core. Dec 16 13:07:20.783966 systemd[1687]: Queued start job for default target default.target. Dec 16 13:07:20.797300 systemd[1687]: Created slice app.slice - User Application Slice. Dec 16 13:07:20.797374 systemd[1687]: Reached target paths.target - Paths. Dec 16 13:07:20.797449 systemd[1687]: Reached target timers.target - Timers. Dec 16 13:07:20.799334 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:07:20.819462 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:07:20.819774 systemd[1687]: Reached target sockets.target - Sockets. Dec 16 13:07:20.819950 systemd[1687]: Reached target basic.target - Basic System. Dec 16 13:07:20.820451 systemd[1687]: Reached target default.target - Main User Target. Dec 16 13:07:20.820487 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:07:20.820889 systemd[1687]: Startup finished in 188ms. Dec 16 13:07:20.827807 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:07:20.896850 systemd[1]: Started sshd@1-143.198.151.179:22-139.178.68.195:51598.service - OpenSSH per-connection server daemon (139.178.68.195:51598). 
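The kubelet failure above is the usual pre-bootstrap state: the unit is enabled, but /var/lib/kubelet/config.yaml is only written later by the bootstrap tooling (kubeadm or an equivalent provisioner), so every start fails until then. For orientation, a skeletal KubeletConfiguration of the kind that file carries (all values illustrative; the log later confirms that cgroupDriver=systemd is taken from the CRI runtime):

    # /var/lib/kubelet/config.yaml -- normally written by bootstrap tooling
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10   # illustrative in-cluster DNS address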
Dec 16 13:07:20.982708 sshd[1698]: Accepted publickey for core from 139.178.68.195 port 51598 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:07:20.985236 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:20.995097 systemd-logind[1522]: New session 2 of user core. Dec 16 13:07:20.999772 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:07:21.067300 sshd[1701]: Connection closed by 139.178.68.195 port 51598 Dec 16 13:07:21.067010 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:21.080297 systemd[1]: sshd@1-143.198.151.179:22-139.178.68.195:51598.service: Deactivated successfully. Dec 16 13:07:21.083114 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 13:07:21.085137 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit. Dec 16 13:07:21.088031 systemd[1]: Started sshd@2-143.198.151.179:22-139.178.68.195:51602.service - OpenSSH per-connection server daemon (139.178.68.195:51602). Dec 16 13:07:21.090742 systemd-logind[1522]: Removed session 2. Dec 16 13:07:21.162662 sshd[1707]: Accepted publickey for core from 139.178.68.195 port 51602 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:07:21.164498 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:21.170585 systemd-logind[1522]: New session 3 of user core. Dec 16 13:07:21.181765 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:07:21.239403 sshd[1710]: Connection closed by 139.178.68.195 port 51602 Dec 16 13:07:21.240001 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:21.255712 systemd[1]: sshd@2-143.198.151.179:22-139.178.68.195:51602.service: Deactivated successfully. Dec 16 13:07:21.258038 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:07:21.260503 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit. Dec 16 13:07:21.262610 systemd[1]: Started sshd@3-143.198.151.179:22-139.178.68.195:51618.service - OpenSSH per-connection server daemon (139.178.68.195:51618). Dec 16 13:07:21.264327 systemd-logind[1522]: Removed session 3. Dec 16 13:07:21.331719 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 51618 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:07:21.333532 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:21.342157 systemd-logind[1522]: New session 4 of user core. Dec 16 13:07:21.348761 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:07:21.412930 sshd[1719]: Connection closed by 139.178.68.195 port 51618 Dec 16 13:07:21.413605 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:21.427589 systemd[1]: sshd@3-143.198.151.179:22-139.178.68.195:51618.service: Deactivated successfully. Dec 16 13:07:21.430630 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:07:21.432066 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:07:21.437696 systemd[1]: Started sshd@4-143.198.151.179:22-139.178.68.195:51630.service - OpenSSH per-connection server daemon (139.178.68.195:51630). Dec 16 13:07:21.439465 systemd-logind[1522]: Removed session 4. 
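The sshd@N-... instance names above come from socket activation with per-connection instances: sshd.socket accepts each TCP connection and spawns a templated sshd@.service for it, which is why every session gets its own unit. The essential stanza looks like this (typical contents, not read from this host):

    [Socket]
    ListenStream=22
    Accept=yes   # one sshd@<connection>.service instance per accepted connection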
Dec 16 13:07:21.516939 sshd[1725]: Accepted publickey for core from 139.178.68.195 port 51630 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:07:21.519062 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:21.525671 systemd-logind[1522]: New session 5 of user core. Dec 16 13:07:21.535689 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:07:21.612277 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:07:21.613545 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:21.632265 sudo[1729]: pam_unix(sudo:session): session closed for user root Dec 16 13:07:21.637069 sshd[1728]: Connection closed by 139.178.68.195 port 51630 Dec 16 13:07:21.637779 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:21.651424 systemd[1]: sshd@4-143.198.151.179:22-139.178.68.195:51630.service: Deactivated successfully. Dec 16 13:07:21.654288 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:07:21.655832 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:07:21.660085 systemd[1]: Started sshd@5-143.198.151.179:22-139.178.68.195:51640.service - OpenSSH per-connection server daemon (139.178.68.195:51640). Dec 16 13:07:21.661234 systemd-logind[1522]: Removed session 5. Dec 16 13:07:21.732303 sshd[1735]: Accepted publickey for core from 139.178.68.195 port 51640 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:07:21.734017 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:21.741661 systemd-logind[1522]: New session 6 of user core. Dec 16 13:07:21.753812 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:07:21.819189 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:07:21.819640 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:21.828470 sudo[1740]: pam_unix(sudo:session): session closed for user root Dec 16 13:07:21.837551 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:07:21.837849 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:21.884199 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:07:21.943438 augenrules[1762]: No rules Dec 16 13:07:21.945372 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:07:21.945680 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:07:21.948017 sudo[1739]: pam_unix(sudo:session): session closed for user root Dec 16 13:07:21.953085 sshd[1738]: Connection closed by 139.178.68.195 port 51640 Dec 16 13:07:21.952487 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:21.962979 systemd[1]: sshd@5-143.198.151.179:22-139.178.68.195:51640.service: Deactivated successfully. Dec 16 13:07:21.965334 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:07:21.966459 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:07:21.970731 systemd[1]: Started sshd@6-143.198.151.179:22-139.178.68.195:51644.service - OpenSSH per-connection server daemon (139.178.68.195:51644). Dec 16 13:07:21.972529 systemd-logind[1522]: Removed session 6. 
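augenrules reports "No rules" above because the two rule files were deleted by the preceding sudo commands; audit-rules.service simply concatenates /etc/audit/rules.d/*.rules and loads the result. For illustration, a minimal hypothetical rule file in that directory:

    # /etc/audit/rules.d/10-example.rules (illustrative)
    # record writes and attribute changes to /etc/passwd under the key "identity"
    -w /etc/passwd -p wa -k identity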
Dec 16 13:07:22.044228 sshd[1771]: Accepted publickey for core from 139.178.68.195 port 51644 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:07:22.045909 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:22.052457 systemd-logind[1522]: New session 7 of user core. Dec 16 13:07:22.061682 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:07:22.122164 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:07:22.123153 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:22.773718 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:07:22.800096 (dockerd)[1793]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:07:23.265161 dockerd[1793]: time="2025-12-16T13:07:23.264977116Z" level=info msg="Starting up" Dec 16 13:07:23.269229 dockerd[1793]: time="2025-12-16T13:07:23.268527624Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:07:23.286798 dockerd[1793]: time="2025-12-16T13:07:23.286682191Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:07:23.314739 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2279861556-merged.mount: Deactivated successfully. Dec 16 13:07:23.403128 systemd[1]: var-lib-docker-metacopy\x2dcheck1319900016-merged.mount: Deactivated successfully. Dec 16 13:07:23.439782 dockerd[1793]: time="2025-12-16T13:07:23.439524754Z" level=info msg="Loading containers: start." Dec 16 13:07:23.456404 kernel: Initializing XFRM netlink socket Dec 16 13:07:23.860767 systemd-networkd[1426]: docker0: Link UP Dec 16 13:07:23.866520 dockerd[1793]: time="2025-12-16T13:07:23.866339185Z" level=info msg="Loading containers: done." Dec 16 13:07:23.891278 dockerd[1793]: time="2025-12-16T13:07:23.890697201Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:07:23.891278 dockerd[1793]: time="2025-12-16T13:07:23.890860921Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:07:23.891278 dockerd[1793]: time="2025-12-16T13:07:23.891021938Z" level=info msg="Initializing buildkit" Dec 16 13:07:23.932408 dockerd[1793]: time="2025-12-16T13:07:23.932169708Z" level=info msg="Completed buildkit initialization" Dec 16 13:07:23.946000 dockerd[1793]: time="2025-12-16T13:07:23.945801694Z" level=info msg="Daemon has completed initialization" Dec 16 13:07:23.946302 dockerd[1793]: time="2025-12-16T13:07:23.946231399Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:07:23.947661 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:07:24.306705 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3306851962-merged.mount: Deactivated successfully. Dec 16 13:07:24.716010 containerd[1540]: time="2025-12-16T13:07:24.715778165Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 13:07:25.366188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262335594.mount: Deactivated successfully. 
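The dockerd startup above settles on the overlay2 storage driver and warns that native diff is degraded because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; choices of this kind are normally pinned in /etc/docker/daemon.json. A minimal sketch (contents illustrative, not from this host):

    {
      "storage-driver": "overlay2",
      "log-driver": "journald"
    }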
Dec 16 13:07:26.747155 containerd[1540]: time="2025-12-16T13:07:26.747061716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:26.749299 containerd[1540]: time="2025-12-16T13:07:26.749180656Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Dec 16 13:07:26.752044 containerd[1540]: time="2025-12-16T13:07:26.751944995Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:26.755147 containerd[1540]: time="2025-12-16T13:07:26.755023524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:26.757189 containerd[1540]: time="2025-12-16T13:07:26.756466667Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.040620776s" Dec 16 13:07:26.757407 containerd[1540]: time="2025-12-16T13:07:26.757220601Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 16 13:07:26.758195 containerd[1540]: time="2025-12-16T13:07:26.758148407Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 13:07:28.435390 containerd[1540]: time="2025-12-16T13:07:28.435210814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:28.437573 containerd[1540]: time="2025-12-16T13:07:28.437445275Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Dec 16 13:07:28.439475 containerd[1540]: time="2025-12-16T13:07:28.439371301Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:28.445938 containerd[1540]: time="2025-12-16T13:07:28.445829438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:28.448910 containerd[1540]: time="2025-12-16T13:07:28.448826703Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.690621153s" Dec 16 13:07:28.448910 containerd[1540]: time="2025-12-16T13:07:28.448901557Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 16 13:07:28.449987 
containerd[1540]: time="2025-12-16T13:07:28.449646087Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 13:07:29.854432 containerd[1540]: time="2025-12-16T13:07:29.854021270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:29.860747 containerd[1540]: time="2025-12-16T13:07:29.859716178Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Dec 16 13:07:29.868478 containerd[1540]: time="2025-12-16T13:07:29.868316257Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:29.881759 containerd[1540]: time="2025-12-16T13:07:29.881640612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:29.882747 containerd[1540]: time="2025-12-16T13:07:29.882666957Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.432955968s" Dec 16 13:07:29.882747 containerd[1540]: time="2025-12-16T13:07:29.882746672Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 16 13:07:29.883584 containerd[1540]: time="2025-12-16T13:07:29.883542341Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 13:07:30.671470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:07:30.675748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:30.935482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:30.946200 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:07:31.039985 kubelet[2088]: E1216 13:07:31.039915 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:07:31.044391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:07:31.044801 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:07:31.045329 systemd[1]: kubelet.service: Consumed 267ms CPU time, 109.7M memory peak. Dec 16 13:07:31.343663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515016881.mount: Deactivated successfully. 
Dec 16 13:07:31.792341 containerd[1540]: time="2025-12-16T13:07:31.792186880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:31.793243 containerd[1540]: time="2025-12-16T13:07:31.793202810Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Dec 16 13:07:31.794637 containerd[1540]: time="2025-12-16T13:07:31.793803409Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:31.795966 containerd[1540]: time="2025-12-16T13:07:31.795926267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:31.796837 containerd[1540]: time="2025-12-16T13:07:31.796798535Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.913208646s" Dec 16 13:07:31.796908 containerd[1540]: time="2025-12-16T13:07:31.796841225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 13:07:31.797769 containerd[1540]: time="2025-12-16T13:07:31.797704609Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 13:07:31.799419 systemd-resolved[1379]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Dec 16 13:07:32.348520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020565803.mount: Deactivated successfully. 
Dec 16 13:07:33.888232 containerd[1540]: time="2025-12-16T13:07:33.888139995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:33.890138 containerd[1540]: time="2025-12-16T13:07:33.889703093Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Dec 16 13:07:33.891115 containerd[1540]: time="2025-12-16T13:07:33.891034456Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:33.896319 containerd[1540]: time="2025-12-16T13:07:33.896150756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:33.898368 containerd[1540]: time="2025-12-16T13:07:33.898269567Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.100515239s" Dec 16 13:07:33.898618 containerd[1540]: time="2025-12-16T13:07:33.898589245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 16 13:07:33.899725 containerd[1540]: time="2025-12-16T13:07:33.899683504Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 13:07:34.452991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount902143682.mount: Deactivated successfully. 
Dec 16 13:07:34.462852 containerd[1540]: time="2025-12-16T13:07:34.462590097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:34.464868 containerd[1540]: time="2025-12-16T13:07:34.464395974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Dec 16 13:07:34.466588 containerd[1540]: time="2025-12-16T13:07:34.466454370Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:34.472143 containerd[1540]: time="2025-12-16T13:07:34.472044786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:34.473459 containerd[1540]: time="2025-12-16T13:07:34.473176875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 573.126459ms" Dec 16 13:07:34.473459 containerd[1540]: time="2025-12-16T13:07:34.473247273Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 16 13:07:34.474456 containerd[1540]: time="2025-12-16T13:07:34.474412825Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 13:07:34.885674 systemd-resolved[1379]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 16 13:07:35.077239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294011251.mount: Deactivated successfully. 
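Each pull above records both a repo tag and an immutable repo digest; the digest, not the tag, is what identifies the exact image content. Against the CRI endpoint the same pull can be reproduced and the recorded digest checked with crictl (commands illustrative):

    crictl pull registry.k8s.io/pause:3.10.1
    crictl inspecti registry.k8s.io/pause:3.10.1   # lists repoTags and repoDigests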
Dec 16 13:07:38.077195 containerd[1540]: time="2025-12-16T13:07:38.077118346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:38.080299 containerd[1540]: time="2025-12-16T13:07:38.080147365Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Dec 16 13:07:38.085407 containerd[1540]: time="2025-12-16T13:07:38.083612655Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:38.090085 containerd[1540]: time="2025-12-16T13:07:38.090013150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:38.098227 containerd[1540]: time="2025-12-16T13:07:38.094396611Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.619489812s" Dec 16 13:07:38.098227 containerd[1540]: time="2025-12-16T13:07:38.094498159Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 13:07:41.296497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:07:41.302773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:41.573566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:41.590334 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:07:41.661071 kubelet[2237]: E1216 13:07:41.660955 2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:07:41.665004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:07:41.665531 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:07:41.666553 systemd[1]: kubelet.service: Consumed 238ms CPU time, 108.6M memory peak. Dec 16 13:07:44.221836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:44.222857 systemd[1]: kubelet.service: Consumed 238ms CPU time, 108.6M memory peak. Dec 16 13:07:44.226189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:44.273853 systemd[1]: Reload requested from client PID 2251 ('systemctl') (unit session-7.scope)... Dec 16 13:07:44.273883 systemd[1]: Reloading... Dec 16 13:07:44.455825 zram_generator::config[2297]: No configuration found. Dec 16 13:07:44.763216 systemd[1]: Reloading finished in 488 ms. Dec 16 13:07:44.846746 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:07:44.847091 systemd[1]: kubelet.service: Failed with result 'signal'. 
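The "Scheduled restart job, restart counter is at 2" line above is systemd's Restart= machinery rather than anything kubelet-specific: the unit keeps being relaunched until its config file exists. The roughly ten-second gaps between attempts in this log are consistent with a stanza like the following (typical values, not read from this host):

    [Service]
    Restart=always
    RestartSec=10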
Dec 16 13:07:44.847844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:44.847919 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.2M memory peak. Dec 16 13:07:44.851970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:45.074199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:45.092040 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:07:45.149782 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:07:45.151340 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:07:45.151340 kubelet[2349]: I1216 13:07:45.150523 2349 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:07:45.575765 kubelet[2349]: I1216 13:07:45.575700 2349 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:07:45.575765 kubelet[2349]: I1216 13:07:45.575758 2349 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:07:45.578015 kubelet[2349]: I1216 13:07:45.577929 2349 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:07:45.579369 kubelet[2349]: I1216 13:07:45.579242 2349 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:07:45.579856 kubelet[2349]: I1216 13:07:45.579778 2349 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:07:45.597290 kubelet[2349]: I1216 13:07:45.597201 2349 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:07:45.599115 kubelet[2349]: E1216 13:07:45.599014 2349 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.151.179:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.151.179:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:07:45.616485 kubelet[2349]: I1216 13:07:45.616263 2349 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:07:45.623833 kubelet[2349]: I1216 13:07:45.623752 2349 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:07:45.625689 kubelet[2349]: I1216 13:07:45.625577 2349 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:07:45.628969 kubelet[2349]: I1216 13:07:45.625671 2349 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-e-d5fd5cf192","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:07:45.628969 kubelet[2349]: I1216 13:07:45.628959 2349 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:07:45.628969 kubelet[2349]: I1216 13:07:45.628987 2349 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:07:45.629289 kubelet[2349]: I1216 13:07:45.629212 2349 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:07:45.633963 kubelet[2349]: I1216 13:07:45.633881 2349 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:07:45.634478 kubelet[2349]: I1216 13:07:45.634431 2349 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:07:45.637239 kubelet[2349]: E1216 13:07:45.637132 2349 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.151.179:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-e-d5fd5cf192&limit=500&resourceVersion=0\": dial tcp 143.198.151.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:07:45.637239 kubelet[2349]: I1216 13:07:45.637233 2349 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:07:45.637514 kubelet[2349]: I1216 13:07:45.637288 2349 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:07:45.637514 kubelet[2349]: I1216 13:07:45.637322 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:07:45.647693 kubelet[2349]: E1216 13:07:45.646397 2349 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://143.198.151.179:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.151.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:07:45.649740 kubelet[2349]: I1216 13:07:45.649699 2349 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:07:45.651385 kubelet[2349]: I1216 13:07:45.650418 2349 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:07:45.651385 kubelet[2349]: I1216 13:07:45.650458 2349 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:07:45.651385 kubelet[2349]: W1216 13:07:45.650527 2349 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:07:45.655734 kubelet[2349]: I1216 13:07:45.655628 2349 server.go:1262] "Started kubelet" Dec 16 13:07:45.658751 kubelet[2349]: I1216 13:07:45.658251 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:07:45.662388 kubelet[2349]: I1216 13:07:45.662314 2349 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:07:45.664382 kubelet[2349]: I1216 13:07:45.664113 2349 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:07:45.666147 kubelet[2349]: I1216 13:07:45.665941 2349 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:07:45.669089 kubelet[2349]: E1216 13:07:45.667298 2349 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" Dec 16 13:07:45.673418 kubelet[2349]: I1216 13:07:45.673383 2349 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:07:45.680703 kubelet[2349]: I1216 13:07:45.680655 2349 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:07:45.680894 kubelet[2349]: I1216 13:07:45.680789 2349 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:07:45.686330 kubelet[2349]: E1216 13:07:45.681434 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.151.179:6443/api/v1/namespaces/default/events\": dial tcp 143.198.151.179:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-e-d5fd5cf192.1881b405bda653d8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-e-d5fd5cf192,UID:ci-4459.2.2-e-d5fd5cf192,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-e-d5fd5cf192,},FirstTimestamp:2025-12-16 13:07:45.655575512 +0000 UTC m=+0.558697511,LastTimestamp:2025-12-16 13:07:45.655575512 +0000 UTC m=+0.558697511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-e-d5fd5cf192,}" Dec 16 13:07:45.686330 kubelet[2349]: E1216 13:07:45.685325 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://143.198.151.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-e-d5fd5cf192?timeout=10s\": dial tcp 143.198.151.179:6443: connect: connection refused" interval="200ms" Dec 16 13:07:45.688397 kubelet[2349]: I1216 13:07:45.687246 2349 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:07:45.688397 kubelet[2349]: I1216 13:07:45.687835 2349 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:07:45.688397 kubelet[2349]: I1216 13:07:45.688198 2349 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:07:45.690910 kubelet[2349]: I1216 13:07:45.690865 2349 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:07:45.698145 kubelet[2349]: I1216 13:07:45.698111 2349 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:07:45.698315 kubelet[2349]: I1216 13:07:45.698304 2349 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:07:45.708608 kubelet[2349]: E1216 13:07:45.708534 2349 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.151.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.151.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:07:45.711400 kubelet[2349]: I1216 13:07:45.711213 2349 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:07:45.713177 kubelet[2349]: I1216 13:07:45.713117 2349 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:07:45.713177 kubelet[2349]: I1216 13:07:45.713166 2349 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:07:45.713408 kubelet[2349]: I1216 13:07:45.713218 2349 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:07:45.713408 kubelet[2349]: E1216 13:07:45.713298 2349 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:07:45.724605 kubelet[2349]: E1216 13:07:45.724544 2349 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.151.179:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.151.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:07:45.725416 kubelet[2349]: E1216 13:07:45.725324 2349 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:07:45.732009 kubelet[2349]: I1216 13:07:45.731544 2349 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:07:45.732009 kubelet[2349]: I1216 13:07:45.732004 2349 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:07:45.732216 kubelet[2349]: I1216 13:07:45.732048 2349 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:07:45.736549 kubelet[2349]: I1216 13:07:45.736090 2349 policy_none.go:49] "None policy: Start" Dec 16 13:07:45.736549 kubelet[2349]: I1216 13:07:45.736128 2349 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:07:45.736549 kubelet[2349]: I1216 13:07:45.736143 2349 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:07:45.738850 kubelet[2349]: I1216 13:07:45.738794 2349 policy_none.go:47] "Start" Dec 16 13:07:45.746448 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:07:45.767871 kubelet[2349]: E1216 13:07:45.767821 2349 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" Dec 16 13:07:45.771189 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:07:45.778958 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:07:45.793404 kubelet[2349]: E1216 13:07:45.793328 2349 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:07:45.793672 kubelet[2349]: I1216 13:07:45.793642 2349 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:07:45.793729 kubelet[2349]: I1216 13:07:45.793666 2349 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:07:45.796097 kubelet[2349]: I1216 13:07:45.796050 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:07:45.798960 kubelet[2349]: E1216 13:07:45.798904 2349 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:07:45.799804 kubelet[2349]: E1216 13:07:45.798980 2349 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-e-d5fd5cf192\" not found" Dec 16 13:07:45.831753 systemd[1]: Created slice kubepods-burstable-pod9f8fdf0a0e3c431976d5ee2f503d43b4.slice - libcontainer container kubepods-burstable-pod9f8fdf0a0e3c431976d5ee2f503d43b4.slice. Dec 16 13:07:45.844477 kubelet[2349]: E1216 13:07:45.844398 2349 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.850145 systemd[1]: Created slice kubepods-burstable-podb8948a6a7d60b38c7afe3e03e23e299e.slice - libcontainer container kubepods-burstable-podb8948a6a7d60b38c7afe3e03e23e299e.slice. 
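With cgroupDriver="systemd" (negotiated from the CRI runtime earlier), the kubelet gives each burstable static pod its own transient slice under kubepods-burstable.slice, named from the pod UID; those are exactly the units systemd reports creating above. A sketch of the naming rule for the QoS classes seen in this boot (in real clusters any dashes in the UID are escaped to underscores; these static-pod UIDs contain none, and guaranteed pods sit directly under kubepods.slice instead):

    def pod_slice(uid, qos="burstable"):
        """systemd slice name the kubelet uses for a burstable/besteffort pod."""
        return "kubepods-{}-pod{}.slice".format(qos, uid.replace("-", "_"))

    assert pod_slice("9f8fdf0a0e3c431976d5ee2f503d43b4") == \
        "kubepods-burstable-pod9f8fdf0a0e3c431976d5ee2f503d43b4.slice"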
Dec 16 13:07:45.855249 kubelet[2349]: E1216 13:07:45.855190 2349 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.859177 systemd[1]: Created slice kubepods-burstable-podcc3f4eb3e08fc1cf813c05fe6bd10048.slice - libcontainer container kubepods-burstable-podcc3f4eb3e08fc1cf813c05fe6bd10048.slice. Dec 16 13:07:45.862238 kubelet[2349]: E1216 13:07:45.861974 2349 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.886402 kubelet[2349]: E1216 13:07:45.886298 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.151.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-e-d5fd5cf192?timeout=10s\": dial tcp 143.198.151.179:6443: connect: connection refused" interval="400ms" Dec 16 13:07:45.895773 kubelet[2349]: I1216 13:07:45.895701 2349 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.896295 kubelet[2349]: E1216 13:07:45.896238 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.151.179:6443/api/v1/nodes\": dial tcp 143.198.151.179:6443: connect: connection refused" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.982742 kubelet[2349]: I1216 13:07:45.982529 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f8fdf0a0e3c431976d5ee2f503d43b4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-e-d5fd5cf192\" (UID: \"9f8fdf0a0e3c431976d5ee2f503d43b4\") " pod="kube-system/kube-apiserver-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.982742 kubelet[2349]: I1216 13:07:45.982626 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b8948a6a7d60b38c7afe3e03e23e299e-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" (UID: \"b8948a6a7d60b38c7afe3e03e23e299e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.982742 kubelet[2349]: I1216 13:07:45.982655 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8948a6a7d60b38c7afe3e03e23e299e-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" (UID: \"b8948a6a7d60b38c7afe3e03e23e299e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.982742 kubelet[2349]: I1216 13:07:45.982685 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8948a6a7d60b38c7afe3e03e23e299e-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" (UID: \"b8948a6a7d60b38c7afe3e03e23e299e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.982742 kubelet[2349]: I1216 13:07:45.982763 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8948a6a7d60b38c7afe3e03e23e299e-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" (UID: \"b8948a6a7d60b38c7afe3e03e23e299e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.983145 kubelet[2349]: I1216 13:07:45.982799 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc3f4eb3e08fc1cf813c05fe6bd10048-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-e-d5fd5cf192\" (UID: \"cc3f4eb3e08fc1cf813c05fe6bd10048\") " pod="kube-system/kube-scheduler-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.983145 kubelet[2349]: I1216 13:07:45.982825 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8948a6a7d60b38c7afe3e03e23e299e-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" (UID: \"b8948a6a7d60b38c7afe3e03e23e299e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.983145 kubelet[2349]: I1216 13:07:45.982852 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f8fdf0a0e3c431976d5ee2f503d43b4-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-e-d5fd5cf192\" (UID: \"9f8fdf0a0e3c431976d5ee2f503d43b4\") " pod="kube-system/kube-apiserver-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:45.983145 kubelet[2349]: I1216 13:07:45.982882 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f8fdf0a0e3c431976d5ee2f503d43b4-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-e-d5fd5cf192\" (UID: \"9f8fdf0a0e3c431976d5ee2f503d43b4\") " pod="kube-system/kube-apiserver-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:46.098741 kubelet[2349]: I1216 13:07:46.098582 2349 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:46.099135 kubelet[2349]: E1216 13:07:46.099094 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.151.179:6443/api/v1/nodes\": dial tcp 143.198.151.179:6443: connect: connection refused" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:46.149131 kubelet[2349]: E1216 13:07:46.149025 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:46.150526 containerd[1540]: time="2025-12-16T13:07:46.150468437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-e-d5fd5cf192,Uid:9f8fdf0a0e3c431976d5ee2f503d43b4,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:46.163625 kubelet[2349]: E1216 13:07:46.162840 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:46.167947 kubelet[2349]: E1216 13:07:46.167187 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:46.172388 containerd[1540]: time="2025-12-16T13:07:46.172290997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-e-d5fd5cf192,Uid:b8948a6a7d60b38c7afe3e03e23e299e,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:46.173091 
containerd[1540]: time="2025-12-16T13:07:46.173051046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-e-d5fd5cf192,Uid:cc3f4eb3e08fc1cf813c05fe6bd10048,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:46.174371 systemd-resolved[1379]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Dec 16 13:07:46.289861 kubelet[2349]: E1216 13:07:46.289772 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.151.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-e-d5fd5cf192?timeout=10s\": dial tcp 143.198.151.179:6443: connect: connection refused" interval="800ms" Dec 16 13:07:46.501122 kubelet[2349]: I1216 13:07:46.500833 2349 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:46.501896 kubelet[2349]: E1216 13:07:46.501768 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.151.179:6443/api/v1/nodes\": dial tcp 143.198.151.179:6443: connect: connection refused" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:46.533976 kubelet[2349]: E1216 13:07:46.533900 2349 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.151.179:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.151.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:07:46.630581 kubelet[2349]: E1216 13:07:46.630508 2349 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.151.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.151.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:07:46.725026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624043723.mount: Deactivated successfully. 
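The "Failed to ensure lease exists, will retry" interval doubles on every failure while the API server at 143.198.151.179:6443 stays unreachable: 200ms, then 400ms, then 800ms above, then 1.6s below. That is plain exponential backoff; a sketch of the schedule (base and factor read directly off the log, the cap value is an assumption):

    def lease_backoff(base_ms=200, factor=2, cap_ms=7000):
        """Yield retry intervals in ms: 200, 400, 800, 1600, ... up to the cap."""
        interval = base_ms
        while True:
            yield min(interval, cap_ms)
            interval *= factor

    g = lease_backoff()
    assert [next(g) for _ in range(4)] == [200, 400, 800, 1600]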
Dec 16 13:07:46.741670 containerd[1540]: time="2025-12-16T13:07:46.741565116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:46.743744 containerd[1540]: time="2025-12-16T13:07:46.743509522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:07:46.749048 containerd[1540]: time="2025-12-16T13:07:46.748976139Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:46.751634 containerd[1540]: time="2025-12-16T13:07:46.750217638Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:07:46.751634 containerd[1540]: time="2025-12-16T13:07:46.751290362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:07:46.753308 containerd[1540]: time="2025-12-16T13:07:46.753234577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:46.758188 containerd[1540]: time="2025-12-16T13:07:46.757519974Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:46.759477 containerd[1540]: time="2025-12-16T13:07:46.759412388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 576.926861ms" Dec 16 13:07:46.762878 containerd[1540]: time="2025-12-16T13:07:46.762811668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 580.328496ms" Dec 16 13:07:46.763830 containerd[1540]: time="2025-12-16T13:07:46.763744115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:46.765178 containerd[1540]: time="2025-12-16T13:07:46.765121725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.934561ms" Dec 16 13:07:46.930113 containerd[1540]: time="2025-12-16T13:07:46.929660331Z" level=info msg="connecting to shim d3c2653b648655b7146be176db315b28c779350eba51e2c0bab172490af7a600" address="unix:///run/containerd/s/427a850c430b546c50d961ba8262253df994364e4b2234b5b996c5feed2c9dd8" namespace=k8s.io protocol=ttrpc version=3 Dec 
16 13:07:46.934232 containerd[1540]: time="2025-12-16T13:07:46.934147703Z" level=info msg="connecting to shim 98394245e15fc7d06bf1cbb39ede38ea9f1ef544795fb328441ff352a6810299" address="unix:///run/containerd/s/d9c3b082ae09f0d0b9891e7431667a218c030beaf0ffdbf11c1632b652a96ae6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:46.935436 kubelet[2349]: E1216 13:07:46.935338 2349 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.151.179:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.151.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:07:46.945570 containerd[1540]: time="2025-12-16T13:07:46.945295953Z" level=info msg="connecting to shim 30d44b54ca16bccaacdaa0dcdb470f91248e851ff0df8c13aa9a507f715ac140" address="unix:///run/containerd/s/6e00657ed2e9a8114c4e1bafa37fd762adbe0ea7de00524ddd6c15b5b3cf2242" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:47.058680 systemd[1]: Started cri-containerd-30d44b54ca16bccaacdaa0dcdb470f91248e851ff0df8c13aa9a507f715ac140.scope - libcontainer container 30d44b54ca16bccaacdaa0dcdb470f91248e851ff0df8c13aa9a507f715ac140. Dec 16 13:07:47.073158 systemd[1]: Started cri-containerd-98394245e15fc7d06bf1cbb39ede38ea9f1ef544795fb328441ff352a6810299.scope - libcontainer container 98394245e15fc7d06bf1cbb39ede38ea9f1ef544795fb328441ff352a6810299. Dec 16 13:07:47.076800 systemd[1]: Started cri-containerd-d3c2653b648655b7146be176db315b28c779350eba51e2c0bab172490af7a600.scope - libcontainer container d3c2653b648655b7146be176db315b28c779350eba51e2c0bab172490af7a600. Dec 16 13:07:47.092153 kubelet[2349]: E1216 13:07:47.091839 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.151.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-e-d5fd5cf192?timeout=10s\": dial tcp 143.198.151.179:6443: connect: connection refused" interval="1.6s" Dec 16 13:07:47.210986 containerd[1540]: time="2025-12-16T13:07:47.210875621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-e-d5fd5cf192,Uid:9f8fdf0a0e3c431976d5ee2f503d43b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"98394245e15fc7d06bf1cbb39ede38ea9f1ef544795fb328441ff352a6810299\"" Dec 16 13:07:47.219692 kubelet[2349]: E1216 13:07:47.219601 2349 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.151.179:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-e-d5fd5cf192&limit=500&resourceVersion=0\": dial tcp 143.198.151.179:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:07:47.224633 kubelet[2349]: E1216 13:07:47.224205 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:47.246260 containerd[1540]: time="2025-12-16T13:07:47.246174273Z" level=info msg="CreateContainer within sandbox \"98394245e15fc7d06bf1cbb39ede38ea9f1ef544795fb328441ff352a6810299\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:07:47.251007 containerd[1540]: time="2025-12-16T13:07:47.250791717Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-e-d5fd5cf192,Uid:b8948a6a7d60b38c7afe3e03e23e299e,Namespace:kube-system,Attempt:0,} returns sandbox id \"30d44b54ca16bccaacdaa0dcdb470f91248e851ff0df8c13aa9a507f715ac140\"" Dec 16 13:07:47.259549 kubelet[2349]: E1216 13:07:47.259084 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:47.265268 containerd[1540]: time="2025-12-16T13:07:47.265118788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-e-d5fd5cf192,Uid:cc3f4eb3e08fc1cf813c05fe6bd10048,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3c2653b648655b7146be176db315b28c779350eba51e2c0bab172490af7a600\"" Dec 16 13:07:47.267043 kubelet[2349]: E1216 13:07:47.266987 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:47.276389 containerd[1540]: time="2025-12-16T13:07:47.275479847Z" level=info msg="CreateContainer within sandbox \"30d44b54ca16bccaacdaa0dcdb470f91248e851ff0df8c13aa9a507f715ac140\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:07:47.276389 containerd[1540]: time="2025-12-16T13:07:47.276169933Z" level=info msg="CreateContainer within sandbox \"d3c2653b648655b7146be176db315b28c779350eba51e2c0bab172490af7a600\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:07:47.277337 containerd[1540]: time="2025-12-16T13:07:47.277246371Z" level=info msg="Container 800eeb753347c00faf37d995d32545e2839774d44ad70e588b7fb996277f8ca9: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:47.300635 containerd[1540]: time="2025-12-16T13:07:47.300496953Z" level=info msg="Container 11b36948843d06b4900eb2d768385224cf729c860d9839f149ffa8323b573847: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:47.306621 kubelet[2349]: I1216 13:07:47.306558 2349 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:47.307327 kubelet[2349]: E1216 13:07:47.307268 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.151.179:6443/api/v1/nodes\": dial tcp 143.198.151.179:6443: connect: connection refused" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:47.309005 containerd[1540]: time="2025-12-16T13:07:47.308880420Z" level=info msg="Container 4930890213aa27b1a3cdfadd56cd9bdb3de8af70b37f4892991f0547aa57944e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:47.309864 containerd[1540]: time="2025-12-16T13:07:47.309795823Z" level=info msg="CreateContainer within sandbox \"98394245e15fc7d06bf1cbb39ede38ea9f1ef544795fb328441ff352a6810299\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"800eeb753347c00faf37d995d32545e2839774d44ad70e588b7fb996277f8ca9\"" Dec 16 13:07:47.311974 containerd[1540]: time="2025-12-16T13:07:47.311869434Z" level=info msg="StartContainer for \"800eeb753347c00faf37d995d32545e2839774d44ad70e588b7fb996277f8ca9\"" Dec 16 13:07:47.323704 containerd[1540]: time="2025-12-16T13:07:47.323647481Z" level=info msg="connecting to shim 800eeb753347c00faf37d995d32545e2839774d44ad70e588b7fb996277f8ca9" address="unix:///run/containerd/s/d9c3b082ae09f0d0b9891e7431667a218c030beaf0ffdbf11c1632b652a96ae6" protocol=ttrpc version=3 Dec 16 13:07:47.325049 containerd[1540]: 
time="2025-12-16T13:07:47.324619612Z" level=info msg="CreateContainer within sandbox \"30d44b54ca16bccaacdaa0dcdb470f91248e851ff0df8c13aa9a507f715ac140\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"11b36948843d06b4900eb2d768385224cf729c860d9839f149ffa8323b573847\"" Dec 16 13:07:47.331605 containerd[1540]: time="2025-12-16T13:07:47.331525425Z" level=info msg="StartContainer for \"11b36948843d06b4900eb2d768385224cf729c860d9839f149ffa8323b573847\"" Dec 16 13:07:47.339320 containerd[1540]: time="2025-12-16T13:07:47.339139051Z" level=info msg="connecting to shim 11b36948843d06b4900eb2d768385224cf729c860d9839f149ffa8323b573847" address="unix:///run/containerd/s/6e00657ed2e9a8114c4e1bafa37fd762adbe0ea7de00524ddd6c15b5b3cf2242" protocol=ttrpc version=3 Dec 16 13:07:47.345724 containerd[1540]: time="2025-12-16T13:07:47.345636852Z" level=info msg="CreateContainer within sandbox \"d3c2653b648655b7146be176db315b28c779350eba51e2c0bab172490af7a600\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4930890213aa27b1a3cdfadd56cd9bdb3de8af70b37f4892991f0547aa57944e\"" Dec 16 13:07:47.351154 containerd[1540]: time="2025-12-16T13:07:47.351075512Z" level=info msg="StartContainer for \"4930890213aa27b1a3cdfadd56cd9bdb3de8af70b37f4892991f0547aa57944e\"" Dec 16 13:07:47.360646 containerd[1540]: time="2025-12-16T13:07:47.360376757Z" level=info msg="connecting to shim 4930890213aa27b1a3cdfadd56cd9bdb3de8af70b37f4892991f0547aa57944e" address="unix:///run/containerd/s/427a850c430b546c50d961ba8262253df994364e4b2234b5b996c5feed2c9dd8" protocol=ttrpc version=3 Dec 16 13:07:47.376981 systemd[1]: Started cri-containerd-800eeb753347c00faf37d995d32545e2839774d44ad70e588b7fb996277f8ca9.scope - libcontainer container 800eeb753347c00faf37d995d32545e2839774d44ad70e588b7fb996277f8ca9. Dec 16 13:07:47.402036 systemd[1]: Started cri-containerd-11b36948843d06b4900eb2d768385224cf729c860d9839f149ffa8323b573847.scope - libcontainer container 11b36948843d06b4900eb2d768385224cf729c860d9839f149ffa8323b573847. Dec 16 13:07:47.419699 systemd[1]: Started cri-containerd-4930890213aa27b1a3cdfadd56cd9bdb3de8af70b37f4892991f0547aa57944e.scope - libcontainer container 4930890213aa27b1a3cdfadd56cd9bdb3de8af70b37f4892991f0547aa57944e. 
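Each sandbox or container above shows up twice: containerd's "connecting to shim" record names the ttrpc socket for it, and systemd then starts the matching transient cri-containerd-<id>.scope unit. A sketch that joins the two views from a journal excerpt (the regex assumes the unescaped logfmt quoting printed in those lines):

    import re

    SHIM = re.compile(r'connecting to shim (?P<id>[0-9a-f]{64})" '
                      r'address="(?P<addr>unix://[^"]+)"')

    def shim_sockets(journal):
        """Map container/sandbox id -> (shim ttrpc socket, systemd scope unit)."""
        return {m["id"]: (m["addr"], "cri-containerd-{}.scope".format(m["id"]))
                for m in SHIM.finditer(journal)}

    line = ('msg="connecting to shim 4930890213aa27b1a3cdfadd56cd9bdb3de8af70b3'
            '7f4892991f0547aa57944e" address="unix:///run/containerd/s/427a850c'
            '430b546c50d961ba8262253df994364e4b2234b5b996c5feed2c9dd8" '
            'protocol=ttrpc version=3')
    assert ("4930890213aa27b1a3cdfadd56cd9bdb3de8af70b37f4892991f0547aa57944e"
            in shim_sockets(line))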
Dec 16 13:07:47.548953 containerd[1540]: time="2025-12-16T13:07:47.548877438Z" level=info msg="StartContainer for \"11b36948843d06b4900eb2d768385224cf729c860d9839f149ffa8323b573847\" returns successfully" Dec 16 13:07:47.582930 containerd[1540]: time="2025-12-16T13:07:47.581213759Z" level=info msg="StartContainer for \"800eeb753347c00faf37d995d32545e2839774d44ad70e588b7fb996277f8ca9\" returns successfully" Dec 16 13:07:47.638172 containerd[1540]: time="2025-12-16T13:07:47.638074350Z" level=info msg="StartContainer for \"4930890213aa27b1a3cdfadd56cd9bdb3de8af70b37f4892991f0547aa57944e\" returns successfully" Dec 16 13:07:47.679864 kubelet[2349]: E1216 13:07:47.679802 2349 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.151.179:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.151.179:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:07:47.762862 kubelet[2349]: E1216 13:07:47.762780 2349 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:47.763421 kubelet[2349]: E1216 13:07:47.763118 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:47.772707 kubelet[2349]: E1216 13:07:47.772655 2349 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:47.773270 kubelet[2349]: E1216 13:07:47.772957 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:47.778911 kubelet[2349]: E1216 13:07:47.778557 2349 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:47.778911 kubelet[2349]: E1216 13:07:47.778894 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:48.785299 kubelet[2349]: E1216 13:07:48.784938 2349 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:48.785299 kubelet[2349]: E1216 13:07:48.785049 2349 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:48.785299 kubelet[2349]: E1216 13:07:48.785168 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:48.785299 kubelet[2349]: E1216 13:07:48.785215 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:48.909702 
kubelet[2349]: I1216 13:07:48.909654 2349 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:50.344412 kubelet[2349]: E1216 13:07:50.342289 2349 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-e-d5fd5cf192\" not found" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:50.347499 kubelet[2349]: E1216 13:07:50.345119 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:51.634747 kubelet[2349]: E1216 13:07:51.634676 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-e-d5fd5cf192\" not found" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:51.642032 kubelet[2349]: I1216 13:07:51.641913 2349 apiserver.go:52] "Watching apiserver" Dec 16 13:07:51.683226 kubelet[2349]: I1216 13:07:51.683070 2349 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:07:51.713385 kubelet[2349]: I1216 13:07:51.712822 2349 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:51.769515 kubelet[2349]: I1216 13:07:51.769455 2349 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:51.787584 kubelet[2349]: E1216 13:07:51.787521 2349 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-e-d5fd5cf192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:51.787584 kubelet[2349]: I1216 13:07:51.787577 2349 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:51.791787 kubelet[2349]: E1216 13:07:51.791685 2349 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:51.791787 kubelet[2349]: I1216 13:07:51.791763 2349 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:51.796981 kubelet[2349]: E1216 13:07:51.796907 2349 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-e-d5fd5cf192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:52.623330 kubelet[2349]: I1216 13:07:52.623282 2349 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:52.634922 kubelet[2349]: I1216 13:07:52.634796 2349 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:07:52.637420 kubelet[2349]: E1216 13:07:52.637271 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:52.801774 kubelet[2349]: E1216 13:07:52.801616 2349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:54.520213 systemd[1]: Reload requested from client PID 2633 ('systemctl') (unit session-7.scope)... Dec 16 13:07:54.520258 systemd[1]: Reloading... Dec 16 13:07:54.697415 zram_generator::config[2679]: No configuration found. Dec 16 13:07:55.097780 systemd[1]: Reloading finished in 572 ms. Dec 16 13:07:55.142886 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:55.156526 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:07:55.157769 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:55.159541 systemd[1]: kubelet.service: Consumed 1.379s CPU time, 121.5M memory peak. Dec 16 13:07:55.162842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:55.638256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:55.658308 (kubelet)[2726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:07:55.810645 kubelet[2726]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:07:55.810645 kubelet[2726]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:07:55.810645 kubelet[2726]: I1216 13:07:55.809729 2726 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:07:55.827380 kubelet[2726]: I1216 13:07:55.827280 2726 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:07:55.827812 kubelet[2726]: I1216 13:07:55.827624 2726 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:07:55.831578 kubelet[2726]: I1216 13:07:55.831452 2726 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:07:55.834209 kubelet[2726]: I1216 13:07:55.831537 2726 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:07:55.837228 kubelet[2726]: I1216 13:07:55.837049 2726 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:07:55.852872 kubelet[2726]: I1216 13:07:55.851253 2726 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:07:55.860191 kubelet[2726]: I1216 13:07:55.860141 2726 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:07:55.901337 kubelet[2726]: I1216 13:07:55.901153 2726 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:07:55.914382 kubelet[2726]: I1216 13:07:55.913713 2726 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:07:55.914382 kubelet[2726]: I1216 13:07:55.914115 2726 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:07:55.914735 kubelet[2726]: I1216 13:07:55.914156 2726 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-e-d5fd5cf192","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:07:55.914982 kubelet[2726]: I1216 13:07:55.914969 2726 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:07:55.915044 kubelet[2726]: I1216 13:07:55.915037 2726 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:07:55.915146 kubelet[2726]: I1216 13:07:55.915133 2726 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:07:55.916658 kubelet[2726]: I1216 13:07:55.916604 2726 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:07:55.924608 kubelet[2726]: I1216 13:07:55.924492 2726 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:07:55.924608 kubelet[2726]: I1216 13:07:55.924549 2726 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:07:55.925375 kubelet[2726]: I1216 13:07:55.924903 2726 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:07:55.927563 kubelet[2726]: I1216 13:07:55.927151 2726 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:07:55.958208 kubelet[2726]: I1216 13:07:55.956742 2726 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:07:55.961837 kubelet[2726]: I1216 13:07:55.960925 2726 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:07:55.963341 kubelet[2726]: I1216 13:07:55.963276 2726 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 
13:07:55.985878 kubelet[2726]: I1216 13:07:55.985810 2726 server.go:1262] "Started kubelet" Dec 16 13:07:55.993937 kubelet[2726]: I1216 13:07:55.993492 2726 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:07:55.997597 kubelet[2726]: I1216 13:07:55.996796 2726 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:07:56.002531 kubelet[2726]: I1216 13:07:55.997954 2726 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:07:56.019716 kubelet[2726]: I1216 13:07:56.003297 2726 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:07:56.024829 kubelet[2726]: I1216 13:07:56.011779 2726 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:07:56.024829 kubelet[2726]: I1216 13:07:56.022430 2726 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:07:56.025964 kubelet[2726]: I1216 13:07:56.025901 2726 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:07:56.033759 kubelet[2726]: I1216 13:07:56.033705 2726 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:07:56.037656 kubelet[2726]: I1216 13:07:56.035697 2726 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:07:56.040235 kubelet[2726]: I1216 13:07:56.039671 2726 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:07:56.056068 kubelet[2726]: I1216 13:07:56.056020 2726 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:07:56.056551 kubelet[2726]: I1216 13:07:56.056485 2726 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:07:56.058016 kubelet[2726]: I1216 13:07:56.057962 2726 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:07:56.068280 kubelet[2726]: E1216 13:07:56.068187 2726 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:07:56.156094 kubelet[2726]: I1216 13:07:56.154786 2726 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:07:56.165984 kubelet[2726]: I1216 13:07:56.165854 2726 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:07:56.167624 kubelet[2726]: I1216 13:07:56.167518 2726 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:07:56.167997 kubelet[2726]: I1216 13:07:56.167842 2726 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:07:56.168226 kubelet[2726]: E1216 13:07:56.167970 2726 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:07:56.208740 kubelet[2726]: I1216 13:07:56.207918 2726 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:07:56.208740 kubelet[2726]: I1216 13:07:56.207941 2726 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:07:56.208740 kubelet[2726]: I1216 13:07:56.207979 2726 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:07:56.208740 kubelet[2726]: I1216 13:07:56.208216 2726 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:07:56.208740 kubelet[2726]: I1216 13:07:56.208243 2726 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:07:56.208740 kubelet[2726]: I1216 13:07:56.208278 2726 policy_none.go:49] "None policy: Start" Dec 16 13:07:56.208740 kubelet[2726]: I1216 13:07:56.208320 2726 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:07:56.211210 kubelet[2726]: I1216 13:07:56.210946 2726 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:07:56.212974 kubelet[2726]: I1216 13:07:56.211711 2726 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 13:07:56.212974 kubelet[2726]: I1216 13:07:56.211765 2726 policy_none.go:47] "Start" Dec 16 13:07:56.232431 kubelet[2726]: E1216 13:07:56.232391 2726 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:07:56.233807 kubelet[2726]: I1216 13:07:56.233782 2726 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:07:56.233997 kubelet[2726]: I1216 13:07:56.233953 2726 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:07:56.237057 kubelet[2726]: I1216 13:07:56.237028 2726 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:07:56.245553 kubelet[2726]: E1216 13:07:56.244537 2726 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:07:56.271652 kubelet[2726]: I1216 13:07:56.271596 2726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.274556 kubelet[2726]: I1216 13:07:56.273017 2726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.281812 kubelet[2726]: I1216 13:07:56.273316 2726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.300030 kubelet[2726]: I1216 13:07:56.299677 2726 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:07:56.309926 kubelet[2726]: I1216 13:07:56.309863 2726 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:07:56.313082 kubelet[2726]: I1216 13:07:56.312895 2726 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:07:56.313082 kubelet[2726]: E1216 13:07:56.312981 2726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-e-d5fd5cf192\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.342326 kubelet[2726]: I1216 13:07:56.341933 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f8fdf0a0e3c431976d5ee2f503d43b4-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-e-d5fd5cf192\" (UID: \"9f8fdf0a0e3c431976d5ee2f503d43b4\") " pod="kube-system/kube-apiserver-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.342326 kubelet[2726]: I1216 13:07:56.342020 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8948a6a7d60b38c7afe3e03e23e299e-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" (UID: \"b8948a6a7d60b38c7afe3e03e23e299e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.342326 kubelet[2726]: I1216 13:07:56.342054 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8948a6a7d60b38c7afe3e03e23e299e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" (UID: \"b8948a6a7d60b38c7afe3e03e23e299e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.342326 kubelet[2726]: I1216 13:07:56.342073 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc3f4eb3e08fc1cf813c05fe6bd10048-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-e-d5fd5cf192\" (UID: \"cc3f4eb3e08fc1cf813c05fe6bd10048\") " pod="kube-system/kube-scheduler-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.342326 kubelet[2726]: I1216 13:07:56.342091 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f8fdf0a0e3c431976d5ee2f503d43b4-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4459.2.2-e-d5fd5cf192\" (UID: \"9f8fdf0a0e3c431976d5ee2f503d43b4\") " pod="kube-system/kube-apiserver-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.342687 kubelet[2726]: I1216 13:07:56.342109 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b8948a6a7d60b38c7afe3e03e23e299e-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" (UID: \"b8948a6a7d60b38c7afe3e03e23e299e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.342687 kubelet[2726]: I1216 13:07:56.342142 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8948a6a7d60b38c7afe3e03e23e299e-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" (UID: \"b8948a6a7d60b38c7afe3e03e23e299e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.342687 kubelet[2726]: I1216 13:07:56.342157 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8948a6a7d60b38c7afe3e03e23e299e-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-e-d5fd5cf192\" (UID: \"b8948a6a7d60b38c7afe3e03e23e299e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.342687 kubelet[2726]: I1216 13:07:56.342193 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f8fdf0a0e3c431976d5ee2f503d43b4-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-e-d5fd5cf192\" (UID: \"9f8fdf0a0e3c431976d5ee2f503d43b4\") " pod="kube-system/kube-apiserver-ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.352244 kubelet[2726]: I1216 13:07:56.352111 2726 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.379811 kubelet[2726]: I1216 13:07:56.378999 2726 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.379811 kubelet[2726]: I1216 13:07:56.379161 2726 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:07:56.601376 kubelet[2726]: E1216 13:07:56.600573 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:56.611934 kubelet[2726]: E1216 13:07:56.611778 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:56.615609 kubelet[2726]: E1216 13:07:56.615550 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:56.932060 kubelet[2726]: I1216 13:07:56.931782 2726 apiserver.go:52] "Watching apiserver" Dec 16 13:07:57.038175 kubelet[2726]: I1216 13:07:57.038116 2726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:07:57.077392 kubelet[2726]: I1216 13:07:57.077268 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-e-d5fd5cf192" 
podStartSLOduration=1.077233897 podStartE2EDuration="1.077233897s" podCreationTimestamp="2025-12-16 13:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:57.076993564 +0000 UTC m=+1.409853706" watchObservedRunningTime="2025-12-16 13:07:57.077233897 +0000 UTC m=+1.410094053" Dec 16 13:07:57.104388 kubelet[2726]: I1216 13:07:57.104275 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-e-d5fd5cf192" podStartSLOduration=5.104248345 podStartE2EDuration="5.104248345s" podCreationTimestamp="2025-12-16 13:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:57.101063154 +0000 UTC m=+1.433923314" watchObservedRunningTime="2025-12-16 13:07:57.104248345 +0000 UTC m=+1.437108509" Dec 16 13:07:57.151018 kubelet[2726]: I1216 13:07:57.150868 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-e-d5fd5cf192" podStartSLOduration=1.150824554 podStartE2EDuration="1.150824554s" podCreationTimestamp="2025-12-16 13:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:57.12399985 +0000 UTC m=+1.456860013" watchObservedRunningTime="2025-12-16 13:07:57.150824554 +0000 UTC m=+1.483684703" Dec 16 13:07:57.244813 kubelet[2726]: E1216 13:07:57.244336 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:57.246266 kubelet[2726]: E1216 13:07:57.246188 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:57.248058 kubelet[2726]: E1216 13:07:57.248019 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:58.248961 kubelet[2726]: E1216 13:07:58.248917 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:58.250133 kubelet[2726]: E1216 13:07:58.249919 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:07:59.251243 kubelet[2726]: E1216 13:07:59.251133 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:00.372273 kubelet[2726]: I1216 13:08:00.372188 2726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:08:00.373209 containerd[1540]: time="2025-12-16T13:08:00.373091618Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
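[Annotation] The recurring "Nameserver limits exceeded" entries arise because glibc's resolver honors at most three `nameserver` lines in resolv.conf; kubelet trims the node's list before building each pod's DNS config and warns every time. The "applied nameserver line" shows 67.207.67.3 twice, so the node's resolv.conf evidently contains a duplicate plus at least one entry that gets dropped. A minimal sketch of the trimming behavior, with a hypothetical fourth entry standing in for whatever was dropped; this is illustrative, not kubelet's actual dns.go:

```go
package main

import "fmt"

// maxDNSNameservers mirrors the resolver's three-entry limit that the
// kubelet warnings in this log are about.
const maxDNSNameservers = 3

// trimNameservers keeps only the first three entries, as kubelet does
// before writing a pod's resolv.conf.
func trimNameservers(ns []string) []string {
	if len(ns) <= maxDNSNameservers {
		return ns
	}
	return ns[:maxDNSNameservers]
}

func main() {
	// First three entries match the "applied nameserver line" in the log;
	// the fourth is a hypothetical stand-in for the dropped entry.
	fmt.Println(trimNameservers([]string{
		"67.207.67.3", "67.207.67.2", "67.207.67.3", "203.0.113.53",
	}))
}
```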
Dec 16 13:08:00.374736 kubelet[2726]: I1216 13:08:00.373586 2726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:08:01.195792 systemd[1]: Created slice kubepods-besteffort-pod189a21c4_44f8_4870_8401_2c81b118017b.slice - libcontainer container kubepods-besteffort-pod189a21c4_44f8_4870_8401_2c81b118017b.slice. Dec 16 13:08:01.281626 kubelet[2726]: I1216 13:08:01.281559 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/189a21c4-44f8-4870-8401-2c81b118017b-xtables-lock\") pod \"kube-proxy-dg5vc\" (UID: \"189a21c4-44f8-4870-8401-2c81b118017b\") " pod="kube-system/kube-proxy-dg5vc" Dec 16 13:08:01.281626 kubelet[2726]: I1216 13:08:01.281622 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/189a21c4-44f8-4870-8401-2c81b118017b-lib-modules\") pod \"kube-proxy-dg5vc\" (UID: \"189a21c4-44f8-4870-8401-2c81b118017b\") " pod="kube-system/kube-proxy-dg5vc" Dec 16 13:08:01.282667 kubelet[2726]: I1216 13:08:01.281655 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzw5q\" (UniqueName: \"kubernetes.io/projected/189a21c4-44f8-4870-8401-2c81b118017b-kube-api-access-bzw5q\") pod \"kube-proxy-dg5vc\" (UID: \"189a21c4-44f8-4870-8401-2c81b118017b\") " pod="kube-system/kube-proxy-dg5vc" Dec 16 13:08:01.282667 kubelet[2726]: I1216 13:08:01.281687 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/189a21c4-44f8-4870-8401-2c81b118017b-kube-proxy\") pod \"kube-proxy-dg5vc\" (UID: \"189a21c4-44f8-4870-8401-2c81b118017b\") " pod="kube-system/kube-proxy-dg5vc" Dec 16 13:08:01.514653 kubelet[2726]: E1216 13:08:01.514467 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:01.518400 containerd[1540]: time="2025-12-16T13:08:01.517632504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dg5vc,Uid:189a21c4-44f8-4870-8401-2c81b118017b,Namespace:kube-system,Attempt:0,}" Dec 16 13:08:01.581309 containerd[1540]: time="2025-12-16T13:08:01.581205662Z" level=info msg="connecting to shim 7b624f6fcdd7e4e21a12565ddd3e37b91915094913db2a46eb2a2ab477cb5b32" address="unix:///run/containerd/s/693a25cfc2f49ba792492e7cfd0a11fe7738c22eeb456282d4b7db8dae9e9b35" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:01.624306 systemd[1]: Created slice kubepods-besteffort-pod2d87ecea_0907_44ac_a5b4_163975b80454.slice - libcontainer container kubepods-besteffort-pod2d87ecea_0907_44ac_a5b4_163975b80454.slice. Dec 16 13:08:01.672597 systemd[1]: Started cri-containerd-7b624f6fcdd7e4e21a12565ddd3e37b91915094913db2a46eb2a2ab477cb5b32.scope - libcontainer container 7b624f6fcdd7e4e21a12565ddd3e37b91915094913db2a46eb2a2ab477cb5b32. 
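[Annotation] The `connecting to shim … protocol=ttrpc version=3` entries show containerd dialing the per-pod shim over a Unix socket using ttrpc, its lightweight gRPC-like protocol. A hedged sketch of the shape of that connection, reusing the socket path from the log entry above; this is not containerd's own code:

```go
package main

import (
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// Socket address copied from the "connecting to shim" entry above.
	conn, err := net.Dial("unix",
		"/run/containerd/s/693a25cfc2f49ba792492e7cfd0a11fe7738c22eeb456282d4b7db8dae9e9b35")
	if err != nil {
		log.Fatal(err)
	}
	// ttrpc multiplexes all shim RPCs over this single connection
	// ("protocol=ttrpc version=3" in the log).
	client := ttrpc.NewClient(conn)
	defer client.Close()
	// A real caller would wrap client in a generated task-service stub.
}
```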
Dec 16 13:08:01.684143 kubelet[2726]: I1216 13:08:01.684063 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2d87ecea-0907-44ac-a5b4-163975b80454-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-zjzqt\" (UID: \"2d87ecea-0907-44ac-a5b4-163975b80454\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-zjzqt" Dec 16 13:08:01.684143 kubelet[2726]: I1216 13:08:01.684122 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ck42\" (UniqueName: \"kubernetes.io/projected/2d87ecea-0907-44ac-a5b4-163975b80454-kube-api-access-9ck42\") pod \"tigera-operator-65cdcdfd6d-zjzqt\" (UID: \"2d87ecea-0907-44ac-a5b4-163975b80454\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-zjzqt" Dec 16 13:08:01.827971 containerd[1540]: time="2025-12-16T13:08:01.827814020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dg5vc,Uid:189a21c4-44f8-4870-8401-2c81b118017b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b624f6fcdd7e4e21a12565ddd3e37b91915094913db2a46eb2a2ab477cb5b32\"" Dec 16 13:08:01.831398 kubelet[2726]: E1216 13:08:01.830961 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:01.844222 containerd[1540]: time="2025-12-16T13:08:01.844154774Z" level=info msg="CreateContainer within sandbox \"7b624f6fcdd7e4e21a12565ddd3e37b91915094913db2a46eb2a2ab477cb5b32\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:08:01.871220 containerd[1540]: time="2025-12-16T13:08:01.871108013Z" level=info msg="Container 4fc14c337c56d2665423a70db724bde50bf44454e907968b8118689772625c0e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:01.890537 containerd[1540]: time="2025-12-16T13:08:01.890463104Z" level=info msg="CreateContainer within sandbox \"7b624f6fcdd7e4e21a12565ddd3e37b91915094913db2a46eb2a2ab477cb5b32\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4fc14c337c56d2665423a70db724bde50bf44454e907968b8118689772625c0e\"" Dec 16 13:08:01.893950 containerd[1540]: time="2025-12-16T13:08:01.893652242Z" level=info msg="StartContainer for \"4fc14c337c56d2665423a70db724bde50bf44454e907968b8118689772625c0e\"" Dec 16 13:08:01.897375 containerd[1540]: time="2025-12-16T13:08:01.897244823Z" level=info msg="connecting to shim 4fc14c337c56d2665423a70db724bde50bf44454e907968b8118689772625c0e" address="unix:///run/containerd/s/693a25cfc2f49ba792492e7cfd0a11fe7738c22eeb456282d4b7db8dae9e9b35" protocol=ttrpc version=3 Dec 16 13:08:01.942127 systemd[1]: Started cri-containerd-4fc14c337c56d2665423a70db724bde50bf44454e907968b8118689772625c0e.scope - libcontainer container 4fc14c337c56d2665423a70db724bde50bf44454e907968b8118689772625c0e. 
Dec 16 13:08:01.947040 containerd[1540]: time="2025-12-16T13:08:01.944866884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-zjzqt,Uid:2d87ecea-0907-44ac-a5b4-163975b80454,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:08:01.999230 containerd[1540]: time="2025-12-16T13:08:01.999163688Z" level=info msg="connecting to shim 7244cbeeb1fe63e0140fe2583dbefa7fee92d761ad82086357e302510a4d49b6" address="unix:///run/containerd/s/e78932cf377934475b7454fdba5b794c21e6a9441ae7985293e45ef629452206" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:02.092908 systemd[1]: Started cri-containerd-7244cbeeb1fe63e0140fe2583dbefa7fee92d761ad82086357e302510a4d49b6.scope - libcontainer container 7244cbeeb1fe63e0140fe2583dbefa7fee92d761ad82086357e302510a4d49b6. Dec 16 13:08:02.116383 containerd[1540]: time="2025-12-16T13:08:02.115682016Z" level=info msg="StartContainer for \"4fc14c337c56d2665423a70db724bde50bf44454e907968b8118689772625c0e\" returns successfully" Dec 16 13:08:02.209050 containerd[1540]: time="2025-12-16T13:08:02.208777378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-zjzqt,Uid:2d87ecea-0907-44ac-a5b4-163975b80454,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7244cbeeb1fe63e0140fe2583dbefa7fee92d761ad82086357e302510a4d49b6\"" Dec 16 13:08:02.215213 containerd[1540]: time="2025-12-16T13:08:02.215125600Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:08:02.276117 kubelet[2726]: E1216 13:08:02.276072 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:02.299019 kubelet[2726]: I1216 13:08:02.298756 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dg5vc" podStartSLOduration=1.298728183 podStartE2EDuration="1.298728183s" podCreationTimestamp="2025-12-16 13:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:08:02.297681912 +0000 UTC m=+6.630542070" watchObservedRunningTime="2025-12-16 13:08:02.298728183 +0000 UTC m=+6.631588342" Dec 16 13:08:02.416944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838360964.mount: Deactivated successfully. Dec 16 13:08:02.573456 kubelet[2726]: E1216 13:08:02.571930 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:03.054448 update_engine[1526]: I20251216 13:08:03.053426 1526 update_attempter.cc:509] Updating boot flags... Dec 16 13:08:03.281857 kubelet[2726]: E1216 13:08:03.281673 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:03.820421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3393002139.mount: Deactivated successfully. 
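[Annotation] The `PullImage "quay.io/tigera/operator:v1.38.7"` entry is kubelet invoking containerd's CRI image service. A hedged sketch of the same call made directly against the CRI endpoint; the socket path is the conventional containerd default (assumed, not shown in the log), and this is illustrative rather than kubelet code:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Conventional containerd CRI endpoint; assumed, not shown in the log.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	// Same image reference kubelet requests in the log entry above.
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.7"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", resp.ImageRef)
}
```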
Dec 16 13:08:05.300040 kubelet[2726]: E1216 13:08:05.299976 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:06.201223 kubelet[2726]: E1216 13:08:06.201170 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:06.288608 kubelet[2726]: E1216 13:08:06.288550 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:06.289770 kubelet[2726]: E1216 13:08:06.289660 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:07.078989 containerd[1540]: time="2025-12-16T13:08:07.078912464Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:07.080976 containerd[1540]: time="2025-12-16T13:08:07.080626861Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 16 13:08:07.080976 containerd[1540]: time="2025-12-16T13:08:07.080920254Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:07.085027 containerd[1540]: time="2025-12-16T13:08:07.084956597Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:07.086333 containerd[1540]: time="2025-12-16T13:08:07.086195660Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.870703237s" Dec 16 13:08:07.086852 containerd[1540]: time="2025-12-16T13:08:07.086462654Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:08:07.104367 containerd[1540]: time="2025-12-16T13:08:07.104273961Z" level=info msg="CreateContainer within sandbox \"7244cbeeb1fe63e0140fe2583dbefa7fee92d761ad82086357e302510a4d49b6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:08:07.117423 containerd[1540]: time="2025-12-16T13:08:07.117317868Z" level=info msg="Container 468c9b8844cf805204949021fab6e972f3aea960011b358dea38cb9af690e308: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:07.126466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126707062.mount: Deactivated successfully. 
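[Annotation] The pull statistics above are self-consistent: 25,061,691 bytes read in 4.870703237 s is roughly 5.1 MB/s, and the elapsed time matches the gap between the PullImage request (13:08:02.215) and completion (13:08:07.086). A trivial check of the arithmetic, using the figures verbatim from the containerd entries:

```go
package main

import "fmt"

func main() {
	// Figures taken verbatim from the "stop pulling image" and
	// "Pulled image ... in 4.870703237s" entries above.
	const bytesRead = 25061691
	const seconds = 4.870703237
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // ~5.1 MB/s
}
```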
Dec 16 13:08:07.134302 containerd[1540]: time="2025-12-16T13:08:07.134198674Z" level=info msg="CreateContainer within sandbox \"7244cbeeb1fe63e0140fe2583dbefa7fee92d761ad82086357e302510a4d49b6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"468c9b8844cf805204949021fab6e972f3aea960011b358dea38cb9af690e308\"" Dec 16 13:08:07.138238 containerd[1540]: time="2025-12-16T13:08:07.138164873Z" level=info msg="StartContainer for \"468c9b8844cf805204949021fab6e972f3aea960011b358dea38cb9af690e308\"" Dec 16 13:08:07.140839 containerd[1540]: time="2025-12-16T13:08:07.140626271Z" level=info msg="connecting to shim 468c9b8844cf805204949021fab6e972f3aea960011b358dea38cb9af690e308" address="unix:///run/containerd/s/e78932cf377934475b7454fdba5b794c21e6a9441ae7985293e45ef629452206" protocol=ttrpc version=3 Dec 16 13:08:07.191748 systemd[1]: Started cri-containerd-468c9b8844cf805204949021fab6e972f3aea960011b358dea38cb9af690e308.scope - libcontainer container 468c9b8844cf805204949021fab6e972f3aea960011b358dea38cb9af690e308. Dec 16 13:08:07.269320 containerd[1540]: time="2025-12-16T13:08:07.268844551Z" level=info msg="StartContainer for \"468c9b8844cf805204949021fab6e972f3aea960011b358dea38cb9af690e308\" returns successfully" Dec 16 13:08:07.320022 kubelet[2726]: I1216 13:08:07.319694 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-zjzqt" podStartSLOduration=1.436909201 podStartE2EDuration="6.319667922s" podCreationTimestamp="2025-12-16 13:08:01 +0000 UTC" firstStartedPulling="2025-12-16 13:08:02.214047469 +0000 UTC m=+6.546907618" lastFinishedPulling="2025-12-16 13:08:07.096806205 +0000 UTC m=+11.429666339" observedRunningTime="2025-12-16 13:08:07.31946024 +0000 UTC m=+11.652320399" watchObservedRunningTime="2025-12-16 13:08:07.319667922 +0000 UTC m=+11.652528082" Dec 16 13:08:14.959808 sudo[1775]: pam_unix(sudo:session): session closed for user root Dec 16 13:08:14.966205 sshd[1774]: Connection closed by 139.178.68.195 port 51644 Dec 16 13:08:14.967723 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:14.978273 systemd[1]: sshd@6-143.198.151.179:22-139.178.68.195:51644.service: Deactivated successfully. Dec 16 13:08:14.989736 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:08:14.991152 systemd[1]: session-7.scope: Consumed 9.472s CPU time, 167.8M memory peak. Dec 16 13:08:14.995250 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:08:14.998578 systemd-logind[1522]: Removed session 7. Dec 16 13:08:22.092148 systemd[1]: Created slice kubepods-besteffort-pod41226a3b_e27e_447d_a44b_6d33e48767b7.slice - libcontainer container kubepods-besteffort-pod41226a3b_e27e_447d_a44b_6d33e48767b7.slice. 
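[Annotation] The `kubepods-besteffort-pod<uid>.slice` names in the systemd entries come from kubelet's systemd cgroup driver: the pod's QoS class is baked into the prefix and the dashes in the pod UID are escaped to underscores to form a valid systemd unit name. A small sketch of that mapping, checkable against the calico-typha entries below:

```go
package main

import (
	"fmt"
	"strings"
)

// besteffortSliceName reproduces the naming visible in the systemd log
// entries: QoS-class prefix plus the pod UID with dashes escaped.
func besteffortSliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID taken from the calico-typha pod entries in this log.
	fmt.Println(besteffortSliceName("41226a3b-e27e-447d-a44b-6d33e48767b7"))
	// -> kubepods-besteffort-pod41226a3b_e27e_447d_a44b_6d33e48767b7.slice
}
```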
Dec 16 13:08:22.166853 kubelet[2726]: I1216 13:08:22.166705 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41226a3b-e27e-447d-a44b-6d33e48767b7-tigera-ca-bundle\") pod \"calico-typha-5845cdb6bd-4vvhm\" (UID: \"41226a3b-e27e-447d-a44b-6d33e48767b7\") " pod="calico-system/calico-typha-5845cdb6bd-4vvhm" Dec 16 13:08:22.166853 kubelet[2726]: I1216 13:08:22.166761 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41226a3b-e27e-447d-a44b-6d33e48767b7-typha-certs\") pod \"calico-typha-5845cdb6bd-4vvhm\" (UID: \"41226a3b-e27e-447d-a44b-6d33e48767b7\") " pod="calico-system/calico-typha-5845cdb6bd-4vvhm" Dec 16 13:08:22.166853 kubelet[2726]: I1216 13:08:22.166778 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4mfm\" (UniqueName: \"kubernetes.io/projected/41226a3b-e27e-447d-a44b-6d33e48767b7-kube-api-access-q4mfm\") pod \"calico-typha-5845cdb6bd-4vvhm\" (UID: \"41226a3b-e27e-447d-a44b-6d33e48767b7\") " pod="calico-system/calico-typha-5845cdb6bd-4vvhm" Dec 16 13:08:22.256694 systemd[1]: Created slice kubepods-besteffort-pode25bdbcc_8771_4942_b2a8_f8235a0b83e5.slice - libcontainer container kubepods-besteffort-pode25bdbcc_8771_4942_b2a8_f8235a0b83e5.slice. Dec 16 13:08:22.368273 kubelet[2726]: I1216 13:08:22.367811 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-node-certs\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368273 kubelet[2726]: I1216 13:08:22.367877 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-cni-net-dir\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368273 kubelet[2726]: I1216 13:08:22.367909 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-lib-modules\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368273 kubelet[2726]: I1216 13:08:22.367944 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-var-lib-calico\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368273 kubelet[2726]: I1216 13:08:22.367973 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-cni-log-dir\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368576 kubelet[2726]: I1216 13:08:22.368005 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptm9s\" (UniqueName: 
\"kubernetes.io/projected/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-kube-api-access-ptm9s\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368576 kubelet[2726]: I1216 13:08:22.368033 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-var-run-calico\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368576 kubelet[2726]: I1216 13:08:22.368055 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-xtables-lock\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368576 kubelet[2726]: I1216 13:08:22.368078 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-cni-bin-dir\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368576 kubelet[2726]: I1216 13:08:22.368104 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-flexvol-driver-host\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368737 kubelet[2726]: I1216 13:08:22.368120 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-policysync\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.368737 kubelet[2726]: I1216 13:08:22.368137 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e25bdbcc-8771-4942-b2a8-f8235a0b83e5-tigera-ca-bundle\") pod \"calico-node-lsfj8\" (UID: \"e25bdbcc-8771-4942-b2a8-f8235a0b83e5\") " pod="calico-system/calico-node-lsfj8" Dec 16 13:08:22.384339 kubelet[2726]: E1216 13:08:22.383819 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:08:22.408158 kubelet[2726]: E1216 13:08:22.407619 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:22.408726 containerd[1540]: time="2025-12-16T13:08:22.408665209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5845cdb6bd-4vvhm,Uid:41226a3b-e27e-447d-a44b-6d33e48767b7,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:22.450520 containerd[1540]: time="2025-12-16T13:08:22.450445714Z" level=info msg="connecting to shim 
2ae84e142a511ba6b129e06e59809c4bb04508b164399abb2af106d4403cba38" address="unix:///run/containerd/s/e32aea3219a9db418cb452cdd87be42c73de9f851d42fa1755b771dd21697324" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:22.471407 kubelet[2726]: I1216 13:08:22.470402 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5092d504-cc04-4db5-bde7-b900923744da-socket-dir\") pod \"csi-node-driver-zqzhv\" (UID: \"5092d504-cc04-4db5-bde7-b900923744da\") " pod="calico-system/csi-node-driver-zqzhv" Dec 16 13:08:22.471407 kubelet[2726]: I1216 13:08:22.470533 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5092d504-cc04-4db5-bde7-b900923744da-kubelet-dir\") pod \"csi-node-driver-zqzhv\" (UID: \"5092d504-cc04-4db5-bde7-b900923744da\") " pod="calico-system/csi-node-driver-zqzhv" Dec 16 13:08:22.471407 kubelet[2726]: I1216 13:08:22.470580 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5092d504-cc04-4db5-bde7-b900923744da-varrun\") pod \"csi-node-driver-zqzhv\" (UID: \"5092d504-cc04-4db5-bde7-b900923744da\") " pod="calico-system/csi-node-driver-zqzhv" Dec 16 13:08:22.471407 kubelet[2726]: I1216 13:08:22.470639 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvcq8\" (UniqueName: \"kubernetes.io/projected/5092d504-cc04-4db5-bde7-b900923744da-kube-api-access-gvcq8\") pod \"csi-node-driver-zqzhv\" (UID: \"5092d504-cc04-4db5-bde7-b900923744da\") " pod="calico-system/csi-node-driver-zqzhv" Dec 16 13:08:22.477388 kubelet[2726]: I1216 13:08:22.472401 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5092d504-cc04-4db5-bde7-b900923744da-registration-dir\") pod \"csi-node-driver-zqzhv\" (UID: \"5092d504-cc04-4db5-bde7-b900923744da\") " pod="calico-system/csi-node-driver-zqzhv" Dec 16 13:08:22.488423 kubelet[2726]: E1216 13:08:22.484835 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.488423 kubelet[2726]: W1216 13:08:22.485075 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.488423 kubelet[2726]: E1216 13:08:22.487738 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.495443 kubelet[2726]: E1216 13:08:22.494029 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.495443 kubelet[2726]: W1216 13:08:22.494177 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.495443 kubelet[2726]: E1216 13:08:22.494210 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:22.502709 kubelet[2726]: E1216 13:08:22.502663 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.502709 kubelet[2726]: W1216 13:08:22.502703 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.503497 kubelet[2726]: E1216 13:08:22.502743 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.505835 kubelet[2726]: E1216 13:08:22.505792 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.505835 kubelet[2726]: W1216 13:08:22.505831 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.507273 kubelet[2726]: E1216 13:08:22.507211 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.510387 kubelet[2726]: E1216 13:08:22.509228 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.510387 kubelet[2726]: W1216 13:08:22.509267 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.510387 kubelet[2726]: E1216 13:08:22.509299 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.515051 kubelet[2726]: E1216 13:08:22.512384 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.515051 kubelet[2726]: W1216 13:08:22.512427 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.515051 kubelet[2726]: E1216 13:08:22.512465 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.515051 kubelet[2726]: E1216 13:08:22.514054 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.515051 kubelet[2726]: W1216 13:08:22.514723 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.515051 kubelet[2726]: E1216 13:08:22.514760 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:22.517943 kubelet[2726]: E1216 13:08:22.517665 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.517943 kubelet[2726]: W1216 13:08:22.517762 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.518534 kubelet[2726]: E1216 13:08:22.518401 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.522204 kubelet[2726]: E1216 13:08:22.521329 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.522204 kubelet[2726]: W1216 13:08:22.521502 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.522204 kubelet[2726]: E1216 13:08:22.521684 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.523550 kubelet[2726]: E1216 13:08:22.523504 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.523550 kubelet[2726]: W1216 13:08:22.523538 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.523740 kubelet[2726]: E1216 13:08:22.523570 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.528642 kubelet[2726]: E1216 13:08:22.525341 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.528642 kubelet[2726]: W1216 13:08:22.525519 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.528642 kubelet[2726]: E1216 13:08:22.525648 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.528642 kubelet[2726]: E1216 13:08:22.525935 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.528642 kubelet[2726]: W1216 13:08:22.525945 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.528642 kubelet[2726]: E1216 13:08:22.525984 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:22.528642 kubelet[2726]: E1216 13:08:22.527584 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.528642 kubelet[2726]: W1216 13:08:22.527603 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.528642 kubelet[2726]: E1216 13:08:22.527623 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.528642 kubelet[2726]: E1216 13:08:22.528651 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.529009 kubelet[2726]: W1216 13:08:22.528662 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.529009 kubelet[2726]: E1216 13:08:22.528700 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.529009 kubelet[2726]: E1216 13:08:22.528895 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.529009 kubelet[2726]: W1216 13:08:22.528948 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.529009 kubelet[2726]: E1216 13:08:22.528962 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.530381 kubelet[2726]: E1216 13:08:22.530050 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.530381 kubelet[2726]: W1216 13:08:22.530073 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.530381 kubelet[2726]: E1216 13:08:22.530086 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.530381 kubelet[2726]: E1216 13:08:22.530331 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.530381 kubelet[2726]: W1216 13:08:22.530339 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.530381 kubelet[2726]: E1216 13:08:22.530373 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:22.530628 kubelet[2726]: E1216 13:08:22.530533 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.530628 kubelet[2726]: W1216 13:08:22.530540 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.530628 kubelet[2726]: E1216 13:08:22.530547 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.531623 kubelet[2726]: E1216 13:08:22.530773 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.531623 kubelet[2726]: W1216 13:08:22.530785 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.531623 kubelet[2726]: E1216 13:08:22.530793 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.533620 systemd[1]: Started cri-containerd-2ae84e142a511ba6b129e06e59809c4bb04508b164399abb2af106d4403cba38.scope - libcontainer container 2ae84e142a511ba6b129e06e59809c4bb04508b164399abb2af106d4403cba38. Dec 16 13:08:22.574903 kubelet[2726]: E1216 13:08:22.574853 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.574903 kubelet[2726]: W1216 13:08:22.574913 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.575443 kubelet[2726]: E1216 13:08:22.574940 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.577554 kubelet[2726]: E1216 13:08:22.577506 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:22.579382 kubelet[2726]: E1216 13:08:22.577866 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.579835 kubelet[2726]: W1216 13:08:22.579668 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.579835 kubelet[2726]: E1216 13:08:22.579708 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:22.579944 containerd[1540]: time="2025-12-16T13:08:22.579763458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lsfj8,Uid:e25bdbcc-8771-4942-b2a8-f8235a0b83e5,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:22.580341 kubelet[2726]: E1216 13:08:22.580320 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.580473 kubelet[2726]: W1216 13:08:22.580457 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.580533 kubelet[2726]: E1216 13:08:22.580523 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.580794 kubelet[2726]: E1216 13:08:22.580781 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.580889 kubelet[2726]: W1216 13:08:22.580854 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.580889 kubelet[2726]: E1216 13:08:22.580869 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.581741 kubelet[2726]: E1216 13:08:22.581723 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.581871 kubelet[2726]: W1216 13:08:22.581802 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.581871 kubelet[2726]: E1216 13:08:22.581820 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.582592 kubelet[2726]: E1216 13:08:22.582500 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.582592 kubelet[2726]: W1216 13:08:22.582515 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.582592 kubelet[2726]: E1216 13:08:22.582528 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:22.582986 kubelet[2726]: E1216 13:08:22.582916 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.582986 kubelet[2726]: W1216 13:08:22.582928 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.582986 kubelet[2726]: E1216 13:08:22.582940 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.583566 kubelet[2726]: E1216 13:08:22.583551 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.583952 kubelet[2726]: W1216 13:08:22.583636 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.583952 kubelet[2726]: E1216 13:08:22.583651 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.584285 kubelet[2726]: E1216 13:08:22.584214 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.584285 kubelet[2726]: W1216 13:08:22.584227 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.584285 kubelet[2726]: E1216 13:08:22.584239 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.585546 kubelet[2726]: E1216 13:08:22.585448 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.585546 kubelet[2726]: W1216 13:08:22.585463 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.585546 kubelet[2726]: E1216 13:08:22.585478 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.585939 kubelet[2726]: E1216 13:08:22.585853 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.585939 kubelet[2726]: W1216 13:08:22.585867 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.585939 kubelet[2726]: E1216 13:08:22.585881 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:22.586219 kubelet[2726]: E1216 13:08:22.586207 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.586421 kubelet[2726]: W1216 13:08:22.586268 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.586421 kubelet[2726]: E1216 13:08:22.586284 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.596682 kubelet[2726]: E1216 13:08:22.596669 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.596736 kubelet[2726]: W1216 13:08:22.596727 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.596786 kubelet[2726]: E1216 13:08:22.596776 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:22.623162 containerd[1540]: time="2025-12-16T13:08:22.621741614Z" level=info msg="connecting to shim cae460e2b5adbf9aea59528e512148e24629537229b639593c8dd47c6caa00e2" address="unix:///run/containerd/s/efa919ef76244a4d542f9fb3a7e7449cff0dfc2ea2579d2b9bb5c181e82d32e2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:22.628849 kubelet[2726]: E1216 13:08:22.628257 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:22.628849 kubelet[2726]: W1216 13:08:22.628454 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:22.628849 kubelet[2726]: E1216 13:08:22.628612 2726 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:22.688848 systemd[1]: Started cri-containerd-cae460e2b5adbf9aea59528e512148e24629537229b639593c8dd47c6caa00e2.scope - libcontainer container cae460e2b5adbf9aea59528e512148e24629537229b639593c8dd47c6caa00e2. Dec 16 13:08:22.812858 containerd[1540]: time="2025-12-16T13:08:22.812788646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lsfj8,Uid:e25bdbcc-8771-4942-b2a8-f8235a0b83e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"cae460e2b5adbf9aea59528e512148e24629537229b639593c8dd47c6caa00e2\"" Dec 16 13:08:22.814376 kubelet[2726]: E1216 13:08:22.814289 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:22.820430 containerd[1540]: time="2025-12-16T13:08:22.820316507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:08:22.890682 containerd[1540]: time="2025-12-16T13:08:22.888818118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5845cdb6bd-4vvhm,Uid:41226a3b-e27e-447d-a44b-6d33e48767b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ae84e142a511ba6b129e06e59809c4bb04508b164399abb2af106d4403cba38\"" Dec 16 13:08:22.892374 kubelet[2726]: E1216 13:08:22.891864 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:24.142664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3443745300.mount: Deactivated successfully. 
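The repeating driver-call.go/plugins.go errors above are kubelet's FlexVolume probe: it execs the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and unmarshals stdout as JSON; the binary is absent, so stdout is empty and the JSON decode fails with exactly the logged error. A minimal Go sketch of that handshake — the DriverStatus shape follows the documented FlexVolume reply convention, not Flatcar's actual code:

```go
// flexvol_init.go — sketch of the FlexVolume "init" handshake failing above.
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus mirrors the FlexVolume reply shape kubelet expects on stdout.
type DriverStatus struct {
	Status       string `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string `json:"message,omitempty"`
	Capabilities *struct {
		Attach bool `json:"attach"`
	} `json:"capabilities,omitempty"`
}

func main() {
	// A missing driver binary produces empty output, which reproduces the
	// kubelet error verbatim.
	var st DriverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("unmarshal:", err) // unexpected end of JSON input
	}

	// What a functioning driver would print in response to "init".
	out, _ := json.Marshal(DriverStatus{Status: "Success"})
	fmt.Println(string(out)) // {"status":"Success"}
}
```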
Dec 16 13:08:24.170371 kubelet[2726]: E1216 13:08:24.170291 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:08:24.284423 containerd[1540]: time="2025-12-16T13:08:24.283557618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:24.285315 containerd[1540]: time="2025-12-16T13:08:24.284855426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Dec 16 13:08:24.286335 containerd[1540]: time="2025-12-16T13:08:24.285861622Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:24.290807 containerd[1540]: time="2025-12-16T13:08:24.290739175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:24.291745 containerd[1540]: time="2025-12-16T13:08:24.291684563Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.47124768s" Dec 16 13:08:24.291745 containerd[1540]: time="2025-12-16T13:08:24.291744842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:08:24.294737 containerd[1540]: time="2025-12-16T13:08:24.294675666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:08:24.303773 containerd[1540]: time="2025-12-16T13:08:24.303601306Z" level=info msg="CreateContainer within sandbox \"cae460e2b5adbf9aea59528e512148e24629537229b639593c8dd47c6caa00e2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:08:24.321707 containerd[1540]: time="2025-12-16T13:08:24.319574289Z" level=info msg="Container d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:24.338808 containerd[1540]: time="2025-12-16T13:08:24.338736477Z" level=info msg="CreateContainer within sandbox \"cae460e2b5adbf9aea59528e512148e24629537229b639593c8dd47c6caa00e2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721\"" Dec 16 13:08:24.341219 containerd[1540]: time="2025-12-16T13:08:24.341164449Z" level=info msg="StartContainer for \"d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721\"" Dec 16 13:08:24.344699 containerd[1540]: time="2025-12-16T13:08:24.344611855Z" level=info msg="connecting to shim d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721" address="unix:///run/containerd/s/efa919ef76244a4d542f9fb3a7e7449cff0dfc2ea2579d2b9bb5c181e82d32e2" protocol=ttrpc 
version=3 Dec 16 13:08:24.389989 systemd[1]: Started cri-containerd-d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721.scope - libcontainer container d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721. Dec 16 13:08:24.474499 containerd[1540]: time="2025-12-16T13:08:24.474258347Z" level=info msg="StartContainer for \"d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721\" returns successfully" Dec 16 13:08:24.499778 systemd[1]: cri-containerd-d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721.scope: Deactivated successfully. Dec 16 13:08:24.509257 containerd[1540]: time="2025-12-16T13:08:24.509134701Z" level=info msg="received container exit event container_id:\"d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721\" id:\"d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721\" pid:3313 exited_at:{seconds:1765890504 nanos:508462938}" Dec 16 13:08:24.556946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d61955e65342fbb1127372cd9cd74d1e37ed74f66ef4bf49b6416e3eb78a1721-rootfs.mount: Deactivated successfully. Dec 16 13:08:25.378866 kubelet[2726]: E1216 13:08:25.378807 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:26.169529 kubelet[2726]: E1216 13:08:26.169097 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:08:26.715757 containerd[1540]: time="2025-12-16T13:08:26.714582854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:26.717580 containerd[1540]: time="2025-12-16T13:08:26.717215998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Dec 16 13:08:26.718185 containerd[1540]: time="2025-12-16T13:08:26.718143031Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:26.721531 containerd[1540]: time="2025-12-16T13:08:26.721445864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:26.723832 containerd[1540]: time="2025-12-16T13:08:26.723659833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.428928708s" Dec 16 13:08:26.724220 containerd[1540]: time="2025-12-16T13:08:26.724185858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:08:26.733037 containerd[1540]: time="2025-12-16T13:08:26.731831552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:08:26.767018 
containerd[1540]: time="2025-12-16T13:08:26.766952134Z" level=info msg="CreateContainer within sandbox \"2ae84e142a511ba6b129e06e59809c4bb04508b164399abb2af106d4403cba38\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:08:26.776633 containerd[1540]: time="2025-12-16T13:08:26.776573350Z" level=info msg="Container efeb2d0476ae91b0291ca07880c9e43d6a6acf8805c18d044a0d8b5cb8907dbe: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:26.789278 containerd[1540]: time="2025-12-16T13:08:26.789223576Z" level=info msg="CreateContainer within sandbox \"2ae84e142a511ba6b129e06e59809c4bb04508b164399abb2af106d4403cba38\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"efeb2d0476ae91b0291ca07880c9e43d6a6acf8805c18d044a0d8b5cb8907dbe\"" Dec 16 13:08:26.792918 containerd[1540]: time="2025-12-16T13:08:26.792863231Z" level=info msg="StartContainer for \"efeb2d0476ae91b0291ca07880c9e43d6a6acf8805c18d044a0d8b5cb8907dbe\"" Dec 16 13:08:26.796127 containerd[1540]: time="2025-12-16T13:08:26.796069764Z" level=info msg="connecting to shim efeb2d0476ae91b0291ca07880c9e43d6a6acf8805c18d044a0d8b5cb8907dbe" address="unix:///run/containerd/s/e32aea3219a9db418cb452cdd87be42c73de9f851d42fa1755b771dd21697324" protocol=ttrpc version=3 Dec 16 13:08:26.838706 systemd[1]: Started cri-containerd-efeb2d0476ae91b0291ca07880c9e43d6a6acf8805c18d044a0d8b5cb8907dbe.scope - libcontainer container efeb2d0476ae91b0291ca07880c9e43d6a6acf8805c18d044a0d8b5cb8907dbe. Dec 16 13:08:26.925481 containerd[1540]: time="2025-12-16T13:08:26.925280660Z" level=info msg="StartContainer for \"efeb2d0476ae91b0291ca07880c9e43d6a6acf8805c18d044a0d8b5cb8907dbe\" returns successfully" Dec 16 13:08:27.393742 kubelet[2726]: E1216 13:08:27.393687 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:27.442010 kubelet[2726]: I1216 13:08:27.438720 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5845cdb6bd-4vvhm" podStartSLOduration=1.605814338 podStartE2EDuration="5.438694577s" podCreationTimestamp="2025-12-16 13:08:22 +0000 UTC" firstStartedPulling="2025-12-16 13:08:22.893295006 +0000 UTC m=+27.226155136" lastFinishedPulling="2025-12-16 13:08:26.726175221 +0000 UTC m=+31.059035375" observedRunningTime="2025-12-16 13:08:27.438581098 +0000 UTC m=+31.771441282" watchObservedRunningTime="2025-12-16 13:08:27.438694577 +0000 UTC m=+31.771554739" Dec 16 13:08:28.171212 kubelet[2726]: E1216 13:08:28.169757 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:08:28.395744 kubelet[2726]: I1216 13:08:28.395692 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:08:28.396295 kubelet[2726]: E1216 13:08:28.396276 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:30.168525 kubelet[2726]: E1216 13:08:30.168460 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:08:31.599265 containerd[1540]: time="2025-12-16T13:08:31.599196843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:31.600494 containerd[1540]: time="2025-12-16T13:08:31.600439055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 16 13:08:31.602695 containerd[1540]: time="2025-12-16T13:08:31.602639619Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:31.603955 containerd[1540]: time="2025-12-16T13:08:31.603883294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:31.605171 containerd[1540]: time="2025-12-16T13:08:31.604988794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.873092091s" Dec 16 13:08:31.605171 containerd[1540]: time="2025-12-16T13:08:31.605046249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:08:31.617793 containerd[1540]: time="2025-12-16T13:08:31.617309497Z" level=info msg="CreateContainer within sandbox \"cae460e2b5adbf9aea59528e512148e24629537229b639593c8dd47c6caa00e2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:08:31.643807 containerd[1540]: time="2025-12-16T13:08:31.643735555Z" level=info msg="Container 9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:31.654295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814010038.mount: Deactivated successfully. Dec 16 13:08:31.690842 containerd[1540]: time="2025-12-16T13:08:31.690683430Z" level=info msg="CreateContainer within sandbox \"cae460e2b5adbf9aea59528e512148e24629537229b639593c8dd47c6caa00e2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2\"" Dec 16 13:08:31.693386 containerd[1540]: time="2025-12-16T13:08:31.691383152Z" level=info msg="StartContainer for \"9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2\"" Dec 16 13:08:31.696064 containerd[1540]: time="2025-12-16T13:08:31.696009201Z" level=info msg="connecting to shim 9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2" address="unix:///run/containerd/s/efa919ef76244a4d542f9fb3a7e7449cff0dfc2ea2579d2b9bb5c181e82d32e2" protocol=ttrpc version=3 Dec 16 13:08:31.739688 systemd[1]: Started cri-containerd-9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2.scope - libcontainer container 9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2. 
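The PullImage / CreateContainer / StartContainer sequence logged above is containerd's ordinary CRI flow. As an illustrative sketch (not the kubelet code path), the same pull can be issued through the containerd Go client, assuming the socket path and the k8s.io namespace shown in the log:

```go
// pull_sketch.go — pull a Calico image via the containerd client (illustrative).
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace
	// seen in the shim log lines above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
```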
Dec 16 13:08:31.851601 containerd[1540]: time="2025-12-16T13:08:31.850514648Z" level=info msg="StartContainer for \"9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2\" returns successfully" Dec 16 13:08:32.003582 kubelet[2726]: I1216 13:08:32.003133 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:08:32.006228 kubelet[2726]: E1216 13:08:32.006069 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:32.170006 kubelet[2726]: E1216 13:08:32.169783 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:08:32.479259 kubelet[2726]: E1216 13:08:32.479131 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:32.479259 kubelet[2726]: E1216 13:08:32.479136 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:32.732626 systemd[1]: cri-containerd-9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2.scope: Deactivated successfully. Dec 16 13:08:32.734986 systemd[1]: cri-containerd-9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2.scope: Consumed 715ms CPU time, 165.5M memory peak, 15.5M read from disk, 171.3M written to disk. Dec 16 13:08:32.768544 containerd[1540]: time="2025-12-16T13:08:32.768055282Z" level=info msg="received container exit event container_id:\"9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2\" id:\"9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2\" pid:3412 exited_at:{seconds:1765890512 nanos:746330451}" Dec 16 13:08:32.830129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d9a0e5c43115a5863aa648fa574983b9610594788c1258f89a0e300f4e874c2-rootfs.mount: Deactivated successfully. Dec 16 13:08:32.846654 kubelet[2726]: I1216 13:08:32.831865 2726 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:08:32.923267 systemd[1]: Created slice kubepods-burstable-podede2e643_66a7_48be_bdf2_068efa6cf822.slice - libcontainer container kubepods-burstable-podede2e643_66a7_48be_bdf2_068efa6cf822.slice. Dec 16 13:08:32.953324 systemd[1]: Created slice kubepods-burstable-poded9a1f64_2701_467a_bb1c_5afefbeb30b1.slice - libcontainer container kubepods-burstable-poded9a1f64_2701_467a_bb1c_5afefbeb30b1.slice. Dec 16 13:08:32.974470 systemd[1]: Created slice kubepods-besteffort-pod9e067442_1617_4c9f_a618_5f4c28d671bd.slice - libcontainer container kubepods-besteffort-pod9e067442_1617_4c9f_a618_5f4c28d671bd.slice. 
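The slice names being created here follow kubelet's systemd cgroup driver convention: the leaf unit is kubepods-<qos>-pod<uid>.slice, with the pod UID's dashes rewritten as underscores. A small sketch of that naming rule, using a UID from the log:

```go
// slice_name.go — sketch of the kubepods leaf-slice naming visible above.
package main

import (
	"fmt"
	"strings"
)

// podSlice builds the leaf slice name for a pod of the given QoS class;
// systemd slice names use "-" as a hierarchy separator, so the UID's own
// dashes are escaped to underscores.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "ede2e643-66a7-48be-bdf2-068efa6cf822"))
	// kubepods-burstable-podede2e643_66a7_48be_bdf2_068efa6cf822.slice
}
```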
Dec 16 13:08:32.986978 kubelet[2726]: I1216 13:08:32.985770 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ede2e643-66a7-48be-bdf2-068efa6cf822-config-volume\") pod \"coredns-66bc5c9577-pz4l6\" (UID: \"ede2e643-66a7-48be-bdf2-068efa6cf822\") " pod="kube-system/coredns-66bc5c9577-pz4l6" Dec 16 13:08:32.987224 kubelet[2726]: I1216 13:08:32.987047 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e067442-1617-4c9f-a618-5f4c28d671bd-tigera-ca-bundle\") pod \"calico-kube-controllers-9f569d77f-ndtwq\" (UID: \"9e067442-1617-4c9f-a618-5f4c28d671bd\") " pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" Dec 16 13:08:32.987359 kubelet[2726]: I1216 13:08:32.987299 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjxr\" (UniqueName: \"kubernetes.io/projected/9e067442-1617-4c9f-a618-5f4c28d671bd-kube-api-access-5sjxr\") pod \"calico-kube-controllers-9f569d77f-ndtwq\" (UID: \"9e067442-1617-4c9f-a618-5f4c28d671bd\") " pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" Dec 16 13:08:32.988546 kubelet[2726]: I1216 13:08:32.988476 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tlfw\" (UniqueName: \"kubernetes.io/projected/ede2e643-66a7-48be-bdf2-068efa6cf822-kube-api-access-2tlfw\") pod \"coredns-66bc5c9577-pz4l6\" (UID: \"ede2e643-66a7-48be-bdf2-068efa6cf822\") " pod="kube-system/coredns-66bc5c9577-pz4l6" Dec 16 13:08:32.988806 kubelet[2726]: I1216 13:08:32.988680 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed9a1f64-2701-467a-bb1c-5afefbeb30b1-config-volume\") pod \"coredns-66bc5c9577-2ktzm\" (UID: \"ed9a1f64-2701-467a-bb1c-5afefbeb30b1\") " pod="kube-system/coredns-66bc5c9577-2ktzm" Dec 16 13:08:32.988806 kubelet[2726]: I1216 13:08:32.988712 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c97abc9-28b5-46fd-ac48-9268ba05dd67-calico-apiserver-certs\") pod \"calico-apiserver-5d49c7467-qxkcz\" (UID: \"3c97abc9-28b5-46fd-ac48-9268ba05dd67\") " pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" Dec 16 13:08:32.989112 kubelet[2726]: I1216 13:08:32.988960 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgc29\" (UniqueName: \"kubernetes.io/projected/ed9a1f64-2701-467a-bb1c-5afefbeb30b1-kube-api-access-vgc29\") pod \"coredns-66bc5c9577-2ktzm\" (UID: \"ed9a1f64-2701-467a-bb1c-5afefbeb30b1\") " pod="kube-system/coredns-66bc5c9577-2ktzm" Dec 16 13:08:32.989336 kubelet[2726]: I1216 13:08:32.989177 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6bf\" (UniqueName: \"kubernetes.io/projected/3c97abc9-28b5-46fd-ac48-9268ba05dd67-kube-api-access-zm6bf\") pod \"calico-apiserver-5d49c7467-qxkcz\" (UID: \"3c97abc9-28b5-46fd-ac48-9268ba05dd67\") " pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" Dec 16 13:08:32.990155 systemd[1]: Created slice kubepods-besteffort-pod3c97abc9_28b5_46fd_ac48_9268ba05dd67.slice - libcontainer container 
kubepods-besteffort-pod3c97abc9_28b5_46fd_ac48_9268ba05dd67.slice. Dec 16 13:08:33.001574 systemd[1]: Created slice kubepods-besteffort-podc5c3da84_f0c0_494e_b5ab_338a3db3dbfc.slice - libcontainer container kubepods-besteffort-podc5c3da84_f0c0_494e_b5ab_338a3db3dbfc.slice. Dec 16 13:08:33.017688 systemd[1]: Created slice kubepods-besteffort-pod6c535668_f4bd_4af5_9cdf_87c693c12696.slice - libcontainer container kubepods-besteffort-pod6c535668_f4bd_4af5_9cdf_87c693c12696.slice. Dec 16 13:08:33.028502 systemd[1]: Created slice kubepods-besteffort-pod4a2f976f_c74a_49b6_b4f9_45981d88e7c0.slice - libcontainer container kubepods-besteffort-pod4a2f976f_c74a_49b6_b4f9_45981d88e7c0.slice. Dec 16 13:08:33.089740 kubelet[2726]: I1216 13:08:33.089675 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-whisker-ca-bundle\") pod \"whisker-6ccf744969-phnb6\" (UID: \"4a2f976f-c74a-49b6-b4f9-45981d88e7c0\") " pod="calico-system/whisker-6ccf744969-phnb6" Dec 16 13:08:33.091300 kubelet[2726]: I1216 13:08:33.091238 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6c535668-f4bd-4af5-9cdf-87c693c12696-goldmane-key-pair\") pod \"goldmane-7c778bb748-tgf4g\" (UID: \"6c535668-f4bd-4af5-9cdf-87c693c12696\") " pod="calico-system/goldmane-7c778bb748-tgf4g" Dec 16 13:08:33.091413 kubelet[2726]: I1216 13:08:33.091369 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrwp\" (UniqueName: \"kubernetes.io/projected/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-kube-api-access-ncrwp\") pod \"whisker-6ccf744969-phnb6\" (UID: \"4a2f976f-c74a-49b6-b4f9-45981d88e7c0\") " pod="calico-system/whisker-6ccf744969-phnb6" Dec 16 13:08:33.091451 kubelet[2726]: I1216 13:08:33.091439 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c535668-f4bd-4af5-9cdf-87c693c12696-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-tgf4g\" (UID: \"6c535668-f4bd-4af5-9cdf-87c693c12696\") " pod="calico-system/goldmane-7c778bb748-tgf4g" Dec 16 13:08:33.091648 kubelet[2726]: I1216 13:08:33.091472 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ql7\" (UniqueName: \"kubernetes.io/projected/c5c3da84-f0c0-494e-b5ab-338a3db3dbfc-kube-api-access-z4ql7\") pod \"calico-apiserver-5d49c7467-w7dmv\" (UID: \"c5c3da84-f0c0-494e-b5ab-338a3db3dbfc\") " pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" Dec 16 13:08:33.091648 kubelet[2726]: I1216 13:08:33.091510 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c535668-f4bd-4af5-9cdf-87c693c12696-config\") pod \"goldmane-7c778bb748-tgf4g\" (UID: \"6c535668-f4bd-4af5-9cdf-87c693c12696\") " pod="calico-system/goldmane-7c778bb748-tgf4g" Dec 16 13:08:33.091648 kubelet[2726]: I1216 13:08:33.091532 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9ltx\" (UniqueName: \"kubernetes.io/projected/6c535668-f4bd-4af5-9cdf-87c693c12696-kube-api-access-s9ltx\") pod \"goldmane-7c778bb748-tgf4g\" (UID: \"6c535668-f4bd-4af5-9cdf-87c693c12696\") " 
pod="calico-system/goldmane-7c778bb748-tgf4g" Dec 16 13:08:33.091648 kubelet[2726]: I1216 13:08:33.091561 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c5c3da84-f0c0-494e-b5ab-338a3db3dbfc-calico-apiserver-certs\") pod \"calico-apiserver-5d49c7467-w7dmv\" (UID: \"c5c3da84-f0c0-494e-b5ab-338a3db3dbfc\") " pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" Dec 16 13:08:33.091648 kubelet[2726]: I1216 13:08:33.091594 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-whisker-backend-key-pair\") pod \"whisker-6ccf744969-phnb6\" (UID: \"4a2f976f-c74a-49b6-b4f9-45981d88e7c0\") " pod="calico-system/whisker-6ccf744969-phnb6" Dec 16 13:08:33.251568 kubelet[2726]: E1216 13:08:33.251380 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:33.253266 containerd[1540]: time="2025-12-16T13:08:33.253198203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pz4l6,Uid:ede2e643-66a7-48be-bdf2-068efa6cf822,Namespace:kube-system,Attempt:0,}" Dec 16 13:08:33.267689 kubelet[2726]: E1216 13:08:33.267599 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:33.273180 containerd[1540]: time="2025-12-16T13:08:33.273132863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2ktzm,Uid:ed9a1f64-2701-467a-bb1c-5afefbeb30b1,Namespace:kube-system,Attempt:0,}" Dec 16 13:08:33.307109 containerd[1540]: time="2025-12-16T13:08:33.305899271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49c7467-qxkcz,Uid:3c97abc9-28b5-46fd-ac48-9268ba05dd67,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:08:33.338675 containerd[1540]: time="2025-12-16T13:08:33.338618765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f569d77f-ndtwq,Uid:9e067442-1617-4c9f-a618-5f4c28d671bd,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:33.339534 containerd[1540]: time="2025-12-16T13:08:33.339494863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ccf744969-phnb6,Uid:4a2f976f-c74a-49b6-b4f9-45981d88e7c0,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:33.349796 containerd[1540]: time="2025-12-16T13:08:33.349726950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49c7467-w7dmv,Uid:c5c3da84-f0c0-494e-b5ab-338a3db3dbfc,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:08:33.353047 containerd[1540]: time="2025-12-16T13:08:33.352985708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tgf4g,Uid:6c535668-f4bd-4af5-9cdf-87c693c12696,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:33.520193 kubelet[2726]: E1216 13:08:33.519766 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:33.594812 containerd[1540]: time="2025-12-16T13:08:33.594738976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:08:33.707392 containerd[1540]: 
time="2025-12-16T13:08:33.707210481Z" level=error msg="Failed to destroy network for sandbox \"8ce60900cb75585ecfa9dd0b985863ae98bfcd1dafa8f24b0592666e796fdcea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.711270 containerd[1540]: time="2025-12-16T13:08:33.710796093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pz4l6,Uid:ede2e643-66a7-48be-bdf2-068efa6cf822,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce60900cb75585ecfa9dd0b985863ae98bfcd1dafa8f24b0592666e796fdcea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.724906 kubelet[2726]: E1216 13:08:33.723867 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce60900cb75585ecfa9dd0b985863ae98bfcd1dafa8f24b0592666e796fdcea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.724906 kubelet[2726]: E1216 13:08:33.723981 2726 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce60900cb75585ecfa9dd0b985863ae98bfcd1dafa8f24b0592666e796fdcea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pz4l6" Dec 16 13:08:33.724906 kubelet[2726]: E1216 13:08:33.724021 2726 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce60900cb75585ecfa9dd0b985863ae98bfcd1dafa8f24b0592666e796fdcea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pz4l6" Dec 16 13:08:33.726334 kubelet[2726]: E1216 13:08:33.724128 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pz4l6_kube-system(ede2e643-66a7-48be-bdf2-068efa6cf822)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pz4l6_kube-system(ede2e643-66a7-48be-bdf2-068efa6cf822)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ce60900cb75585ecfa9dd0b985863ae98bfcd1dafa8f24b0592666e796fdcea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pz4l6" podUID="ede2e643-66a7-48be-bdf2-068efa6cf822" Dec 16 13:08:33.762476 containerd[1540]: time="2025-12-16T13:08:33.762410058Z" level=error msg="Failed to destroy network for sandbox \"3952ad311662b995f1d7e64c977f741f380304cd2b10edee3ef06fef882d99cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
13:08:33.770328 containerd[1540]: time="2025-12-16T13:08:33.770148293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2ktzm,Uid:ed9a1f64-2701-467a-bb1c-5afefbeb30b1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3952ad311662b995f1d7e64c977f741f380304cd2b10edee3ef06fef882d99cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.774898 kubelet[2726]: E1216 13:08:33.771504 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3952ad311662b995f1d7e64c977f741f380304cd2b10edee3ef06fef882d99cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.774898 kubelet[2726]: E1216 13:08:33.771608 2726 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3952ad311662b995f1d7e64c977f741f380304cd2b10edee3ef06fef882d99cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2ktzm" Dec 16 13:08:33.774898 kubelet[2726]: E1216 13:08:33.771645 2726 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3952ad311662b995f1d7e64c977f741f380304cd2b10edee3ef06fef882d99cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2ktzm" Dec 16 13:08:33.775249 kubelet[2726]: E1216 13:08:33.771749 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2ktzm_kube-system(ed9a1f64-2701-467a-bb1c-5afefbeb30b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2ktzm_kube-system(ed9a1f64-2701-467a-bb1c-5afefbeb30b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3952ad311662b995f1d7e64c977f741f380304cd2b10edee3ef06fef882d99cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2ktzm" podUID="ed9a1f64-2701-467a-bb1c-5afefbeb30b1" Dec 16 13:08:33.810982 containerd[1540]: time="2025-12-16T13:08:33.810901900Z" level=error msg="Failed to destroy network for sandbox \"51d5cd0e6d73332b6546de79be92e344352ae7767ead08cb8e9becf5aba7690f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.813085 containerd[1540]: time="2025-12-16T13:08:33.812995792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ccf744969-phnb6,Uid:4a2f976f-c74a-49b6-b4f9-45981d88e7c0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"51d5cd0e6d73332b6546de79be92e344352ae7767ead08cb8e9becf5aba7690f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.814021 kubelet[2726]: E1216 13:08:33.813771 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51d5cd0e6d73332b6546de79be92e344352ae7767ead08cb8e9becf5aba7690f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.814496 kubelet[2726]: E1216 13:08:33.813990 2726 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51d5cd0e6d73332b6546de79be92e344352ae7767ead08cb8e9becf5aba7690f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ccf744969-phnb6" Dec 16 13:08:33.814496 kubelet[2726]: E1216 13:08:33.814218 2726 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51d5cd0e6d73332b6546de79be92e344352ae7767ead08cb8e9becf5aba7690f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ccf744969-phnb6" Dec 16 13:08:33.814987 kubelet[2726]: E1216 13:08:33.814714 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6ccf744969-phnb6_calico-system(4a2f976f-c74a-49b6-b4f9-45981d88e7c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6ccf744969-phnb6_calico-system(4a2f976f-c74a-49b6-b4f9-45981d88e7c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51d5cd0e6d73332b6546de79be92e344352ae7767ead08cb8e9becf5aba7690f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6ccf744969-phnb6" podUID="4a2f976f-c74a-49b6-b4f9-45981d88e7c0" Dec 16 13:08:33.827576 containerd[1540]: time="2025-12-16T13:08:33.827505842Z" level=error msg="Failed to destroy network for sandbox \"35c60ed27b8b72dfc94eb487e84c75ee3c208d45d5f88195b93bcbf2e9468f01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.842722 containerd[1540]: time="2025-12-16T13:08:33.842457658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tgf4g,Uid:6c535668-f4bd-4af5-9cdf-87c693c12696,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35c60ed27b8b72dfc94eb487e84c75ee3c208d45d5f88195b93bcbf2e9468f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.849558 kubelet[2726]: E1216 13:08:33.842873 2726 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35c60ed27b8b72dfc94eb487e84c75ee3c208d45d5f88195b93bcbf2e9468f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.849558 kubelet[2726]: E1216 13:08:33.848731 2726 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35c60ed27b8b72dfc94eb487e84c75ee3c208d45d5f88195b93bcbf2e9468f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tgf4g" Dec 16 13:08:33.849558 kubelet[2726]: E1216 13:08:33.848786 2726 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35c60ed27b8b72dfc94eb487e84c75ee3c208d45d5f88195b93bcbf2e9468f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tgf4g" Dec 16 13:08:33.849846 kubelet[2726]: E1216 13:08:33.848887 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-tgf4g_calico-system(6c535668-f4bd-4af5-9cdf-87c693c12696)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-tgf4g_calico-system(6c535668-f4bd-4af5-9cdf-87c693c12696)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35c60ed27b8b72dfc94eb487e84c75ee3c208d45d5f88195b93bcbf2e9468f01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tgf4g" podUID="6c535668-f4bd-4af5-9cdf-87c693c12696" Dec 16 13:08:33.856403 systemd[1]: run-netns-cni\x2d328f18e0\x2d2842\x2df83c\x2d369d\x2d5a92fb42aa08.mount: Deactivated successfully. Dec 16 13:08:33.870668 containerd[1540]: time="2025-12-16T13:08:33.868613092Z" level=error msg="Failed to destroy network for sandbox \"a190bb851d8e0600d11d5d4c0eb9f37ee56651df0015462d724fc3f9a5d819dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.875326 containerd[1540]: time="2025-12-16T13:08:33.874848970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49c7467-qxkcz,Uid:3c97abc9-28b5-46fd-ac48-9268ba05dd67,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a190bb851d8e0600d11d5d4c0eb9f37ee56651df0015462d724fc3f9a5d819dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.877472 systemd[1]: run-netns-cni\x2d2df32a7f\x2d6617\x2d2ab0\x2d2e63\x2dac5438df4c29.mount: Deactivated successfully. 
Dec 16 13:08:33.878450 kubelet[2726]: E1216 13:08:33.877339 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a190bb851d8e0600d11d5d4c0eb9f37ee56651df0015462d724fc3f9a5d819dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.879745 kubelet[2726]: E1216 13:08:33.879704 2726 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a190bb851d8e0600d11d5d4c0eb9f37ee56651df0015462d724fc3f9a5d819dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" Dec 16 13:08:33.879855 kubelet[2726]: E1216 13:08:33.879748 2726 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a190bb851d8e0600d11d5d4c0eb9f37ee56651df0015462d724fc3f9a5d819dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" Dec 16 13:08:33.879905 kubelet[2726]: E1216 13:08:33.879878 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d49c7467-qxkcz_calico-apiserver(3c97abc9-28b5-46fd-ac48-9268ba05dd67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d49c7467-qxkcz_calico-apiserver(3c97abc9-28b5-46fd-ac48-9268ba05dd67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a190bb851d8e0600d11d5d4c0eb9f37ee56651df0015462d724fc3f9a5d819dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" podUID="3c97abc9-28b5-46fd-ac48-9268ba05dd67" Dec 16 13:08:33.892420 containerd[1540]: time="2025-12-16T13:08:33.891738424Z" level=error msg="Failed to destroy network for sandbox \"757e3f7118c370b895d487326a57ae55671dbe7497c8fa5b8c7955230cbf1b6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.895235 containerd[1540]: time="2025-12-16T13:08:33.895156358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49c7467-w7dmv,Uid:c5c3da84-f0c0-494e-b5ab-338a3db3dbfc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"757e3f7118c370b895d487326a57ae55671dbe7497c8fa5b8c7955230cbf1b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.896654 systemd[1]: run-netns-cni\x2d444e4e48\x2ddb7d\x2d7d85\x2da859\x2df1d71dea20f9.mount: Deactivated successfully. 
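The "rpc error: code = Unknown desc = ..." strings in these kubelet lines are gRPC statuses relayed from containerd over CRI. An illustrative sketch of how such a status is built on one side and unpacked on the other, using grpc-go's status package:

```go
// grpc_status.go — build and unpack the gRPC status seen in the log lines.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Server side: containerd wraps the CNI failure in a status error.
	err := status.Error(codes.Unknown,
		`failed to setup network for sandbox "757e3f7118c370b895d487326a57ae55671dbe7497c8fa5b8c7955230cbf1b6f": plugin type="calico" failed (add)`)

	// Client side: kubelet recovers the code and message for its logs.
	if st, ok := status.FromError(err); ok {
		fmt.Printf("rpc error: code = %s desc = %s\n", st.Code(), st.Message())
	}
}
```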
Dec 16 13:08:33.897507 kubelet[2726]: E1216 13:08:33.897001 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757e3f7118c370b895d487326a57ae55671dbe7497c8fa5b8c7955230cbf1b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.897507 kubelet[2726]: E1216 13:08:33.897088 2726 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757e3f7118c370b895d487326a57ae55671dbe7497c8fa5b8c7955230cbf1b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" Dec 16 13:08:33.897507 kubelet[2726]: E1216 13:08:33.897133 2726 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757e3f7118c370b895d487326a57ae55671dbe7497c8fa5b8c7955230cbf1b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" Dec 16 13:08:33.897677 kubelet[2726]: E1216 13:08:33.897195 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d49c7467-w7dmv_calico-apiserver(c5c3da84-f0c0-494e-b5ab-338a3db3dbfc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d49c7467-w7dmv_calico-apiserver(c5c3da84-f0c0-494e-b5ab-338a3db3dbfc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"757e3f7118c370b895d487326a57ae55671dbe7497c8fa5b8c7955230cbf1b6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" podUID="c5c3da84-f0c0-494e-b5ab-338a3db3dbfc" Dec 16 13:08:33.914451 containerd[1540]: time="2025-12-16T13:08:33.914337925Z" level=error msg="Failed to destroy network for sandbox \"5296fca4665918bae853be38d05f709c700b6efa284b8c273d4739f4126e5671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.917882 containerd[1540]: time="2025-12-16T13:08:33.916998115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f569d77f-ndtwq,Uid:9e067442-1617-4c9f-a618-5f4c28d671bd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5296fca4665918bae853be38d05f709c700b6efa284b8c273d4739f4126e5671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.918711 kubelet[2726]: E1216 13:08:33.918629 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5296fca4665918bae853be38d05f709c700b6efa284b8c273d4739f4126e5671\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:33.918814 kubelet[2726]: E1216 13:08:33.918760 2726 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5296fca4665918bae853be38d05f709c700b6efa284b8c273d4739f4126e5671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" Dec 16 13:08:33.918849 kubelet[2726]: E1216 13:08:33.918798 2726 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5296fca4665918bae853be38d05f709c700b6efa284b8c273d4739f4126e5671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" Dec 16 13:08:33.919562 kubelet[2726]: E1216 13:08:33.918924 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9f569d77f-ndtwq_calico-system(9e067442-1617-4c9f-a618-5f4c28d671bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9f569d77f-ndtwq_calico-system(9e067442-1617-4c9f-a618-5f4c28d671bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5296fca4665918bae853be38d05f709c700b6efa284b8c273d4739f4126e5671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" podUID="9e067442-1617-4c9f-a618-5f4c28d671bd" Dec 16 13:08:33.919896 systemd[1]: run-netns-cni\x2dbbdddfc9\x2d6502\x2d1df6\x2decdb\x2d5de06ac9f6d7.mount: Deactivated successfully. Dec 16 13:08:34.178192 systemd[1]: Created slice kubepods-besteffort-pod5092d504_cc04_4db5_bde7_b900923744da.slice - libcontainer container kubepods-besteffort-pod5092d504_cc04_4db5_bde7_b900923744da.slice. 
Dec 16 13:08:34.183943 containerd[1540]: time="2025-12-16T13:08:34.183894628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqzhv,Uid:5092d504-cc04-4db5-bde7-b900923744da,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:34.277497 containerd[1540]: time="2025-12-16T13:08:34.277360276Z" level=error msg="Failed to destroy network for sandbox \"2d882e1abaaa0942b3d53fd178e95746cbacdc1ac35c4e694f470b203e2130d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:34.278982 containerd[1540]: time="2025-12-16T13:08:34.278886229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqzhv,Uid:5092d504-cc04-4db5-bde7-b900923744da,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d882e1abaaa0942b3d53fd178e95746cbacdc1ac35c4e694f470b203e2130d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:34.279871 kubelet[2726]: E1216 13:08:34.279563 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d882e1abaaa0942b3d53fd178e95746cbacdc1ac35c4e694f470b203e2130d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:34.279871 kubelet[2726]: E1216 13:08:34.279659 2726 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d882e1abaaa0942b3d53fd178e95746cbacdc1ac35c4e694f470b203e2130d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqzhv" Dec 16 13:08:34.279871 kubelet[2726]: E1216 13:08:34.279696 2726 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d882e1abaaa0942b3d53fd178e95746cbacdc1ac35c4e694f470b203e2130d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqzhv" Dec 16 13:08:34.281823 kubelet[2726]: E1216 13:08:34.280483 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zqzhv_calico-system(5092d504-cc04-4db5-bde7-b900923744da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zqzhv_calico-system(5092d504-cc04-4db5-bde7-b900923744da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d882e1abaaa0942b3d53fd178e95746cbacdc1ac35c4e694f470b203e2130d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:08:40.923783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246760569.mount: Deactivated successfully. 
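The failing call itself is CRI's RunPodSandbox. A sketch of issuing it directly against the containerd socket from the log — assuming cri-api v1 and grpc-go, with the sandbox metadata copied from the csi-node-driver entry above; kubelet's own client adds much more (log directory, DNS config, linux options):

```go
// run_sandbox.go — illustrative direct CRI RunPodSandbox call (assumptions noted above).
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "csi-node-driver-zqzhv",
				Uid:       "5092d504-cc04-4db5-bde7-b900923744da",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err) // the calico "nodename" status above arrives here
	}
	log.Println("sandbox id:", resp.PodSandboxId)
}
```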
Dec 16 13:08:41.087411 containerd[1540]: time="2025-12-16T13:08:41.085805051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 16 13:08:41.212389 containerd[1540]: time="2025-12-16T13:08:41.212191505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:41.225237 containerd[1540]: time="2025-12-16T13:08:41.225153398Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:41.248736 containerd[1540]: time="2025-12-16T13:08:41.248640264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:41.249884 containerd[1540]: time="2025-12-16T13:08:41.249810924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.654984387s" Dec 16 13:08:41.249884 containerd[1540]: time="2025-12-16T13:08:41.249873878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:08:41.347160 containerd[1540]: time="2025-12-16T13:08:41.347036987Z" level=info msg="CreateContainer within sandbox \"cae460e2b5adbf9aea59528e512148e24629537229b639593c8dd47c6caa00e2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:08:41.419956 containerd[1540]: time="2025-12-16T13:08:41.419061074Z" level=info msg="Container 395dae75a1ee3f4cb0fbb5419daef32e5cb473fd5feed5491520b5b0a182694e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:41.425233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2270928096.mount: Deactivated successfully. Dec 16 13:08:41.464060 containerd[1540]: time="2025-12-16T13:08:41.463873023Z" level=info msg="CreateContainer within sandbox \"cae460e2b5adbf9aea59528e512148e24629537229b639593c8dd47c6caa00e2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"395dae75a1ee3f4cb0fbb5419daef32e5cb473fd5feed5491520b5b0a182694e\"" Dec 16 13:08:41.467642 containerd[1540]: time="2025-12-16T13:08:41.467577879Z" level=info msg="StartContainer for \"395dae75a1ee3f4cb0fbb5419daef32e5cb473fd5feed5491520b5b0a182694e\"" Dec 16 13:08:41.479093 containerd[1540]: time="2025-12-16T13:08:41.479001242Z" level=info msg="connecting to shim 395dae75a1ee3f4cb0fbb5419daef32e5cb473fd5feed5491520b5b0a182694e" address="unix:///run/containerd/s/efa919ef76244a4d542f9fb3a7e7449cff0dfc2ea2579d2b9bb5c181e82d32e2" protocol=ttrpc version=3 Dec 16 13:08:41.692737 systemd[1]: Started cri-containerd-395dae75a1ee3f4cb0fbb5419daef32e5cb473fd5feed5491520b5b0a182694e.scope - libcontainer container 395dae75a1ee3f4cb0fbb5419daef32e5cb473fd5feed5491520b5b0a182694e. Dec 16 13:08:41.879971 containerd[1540]: time="2025-12-16T13:08:41.879914338Z" level=info msg="StartContainer for \"395dae75a1ee3f4cb0fbb5419daef32e5cb473fd5feed5491520b5b0a182694e\" returns successfully" Dec 16 13:08:42.036907 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Dec 16 13:08:42.037113 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 16 13:08:42.480569 kubelet[2726]: I1216 13:08:42.480298 2726 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4a2f976f-c74a-49b6-b4f9-45981d88e7c0" (UID: "4a2f976f-c74a-49b6-b4f9-45981d88e7c0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:08:42.481220 kubelet[2726]: I1216 13:08:42.480640 2726 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-whisker-ca-bundle\") pod \"4a2f976f-c74a-49b6-b4f9-45981d88e7c0\" (UID: \"4a2f976f-c74a-49b6-b4f9-45981d88e7c0\") " Dec 16 13:08:42.481220 kubelet[2726]: I1216 13:08:42.480964 2726 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncrwp\" (UniqueName: \"kubernetes.io/projected/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-kube-api-access-ncrwp\") pod \"4a2f976f-c74a-49b6-b4f9-45981d88e7c0\" (UID: \"4a2f976f-c74a-49b6-b4f9-45981d88e7c0\") " Dec 16 13:08:42.481808 kubelet[2726]: I1216 13:08:42.481316 2726 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-whisker-backend-key-pair\") pod \"4a2f976f-c74a-49b6-b4f9-45981d88e7c0\" (UID: \"4a2f976f-c74a-49b6-b4f9-45981d88e7c0\") " Dec 16 13:08:42.484216 kubelet[2726]: I1216 13:08:42.481941 2726 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-whisker-ca-bundle\") on node \"ci-4459.2.2-e-d5fd5cf192\" DevicePath \"\"" Dec 16 13:08:42.499860 systemd[1]: var-lib-kubelet-pods-4a2f976f\x2dc74a\x2d49b6\x2db4f9\x2d45981d88e7c0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 13:08:42.503975 kubelet[2726]: I1216 13:08:42.502701 2726 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-kube-api-access-ncrwp" (OuterVolumeSpecName: "kube-api-access-ncrwp") pod "4a2f976f-c74a-49b6-b4f9-45981d88e7c0" (UID: "4a2f976f-c74a-49b6-b4f9-45981d88e7c0"). InnerVolumeSpecName "kube-api-access-ncrwp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:08:42.504664 kubelet[2726]: I1216 13:08:42.504614 2726 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4a2f976f-c74a-49b6-b4f9-45981d88e7c0" (UID: "4a2f976f-c74a-49b6-b4f9-45981d88e7c0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:08:42.508837 systemd[1]: var-lib-kubelet-pods-4a2f976f\x2dc74a\x2d49b6\x2db4f9\x2d45981d88e7c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dncrwp.mount: Deactivated successfully.
Dec 16 13:08:42.579381 kubelet[2726]: E1216 13:08:42.577387 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:42.582439 kubelet[2726]: I1216 13:08:42.582388 2726 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ncrwp\" (UniqueName: \"kubernetes.io/projected/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-kube-api-access-ncrwp\") on node \"ci-4459.2.2-e-d5fd5cf192\" DevicePath \"\"" Dec 16 13:08:42.582439 kubelet[2726]: I1216 13:08:42.582427 2726 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4a2f976f-c74a-49b6-b4f9-45981d88e7c0-whisker-backend-key-pair\") on node \"ci-4459.2.2-e-d5fd5cf192\" DevicePath \"\"" Dec 16 13:08:42.588619 systemd[1]: Removed slice kubepods-besteffort-pod4a2f976f_c74a_49b6_b4f9_45981d88e7c0.slice - libcontainer container kubepods-besteffort-pod4a2f976f_c74a_49b6_b4f9_45981d88e7c0.slice. Dec 16 13:08:42.621232 kubelet[2726]: I1216 13:08:42.621092 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lsfj8" podStartSLOduration=2.14306955 podStartE2EDuration="20.621017725s" podCreationTimestamp="2025-12-16 13:08:22 +0000 UTC" firstStartedPulling="2025-12-16 13:08:22.816014745 +0000 UTC m=+27.148874882" lastFinishedPulling="2025-12-16 13:08:41.293962927 +0000 UTC m=+45.626823057" observedRunningTime="2025-12-16 13:08:42.620189123 +0000 UTC m=+46.953049273" watchObservedRunningTime="2025-12-16 13:08:42.621017725 +0000 UTC m=+46.953877884" Dec 16 13:08:42.757438 systemd[1]: Created slice kubepods-besteffort-pod7c8a8da2_19eb_4046_963e_f1ae60b760a8.slice - libcontainer container kubepods-besteffort-pod7c8a8da2_19eb_4046_963e_f1ae60b760a8.slice. 
Dec 16 13:08:42.884321 kubelet[2726]: I1216 13:08:42.884182 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c8a8da2-19eb-4046-963e-f1ae60b760a8-whisker-backend-key-pair\") pod \"whisker-8c8d47c9c-47lfm\" (UID: \"7c8a8da2-19eb-4046-963e-f1ae60b760a8\") " pod="calico-system/whisker-8c8d47c9c-47lfm" Dec 16 13:08:42.884321 kubelet[2726]: I1216 13:08:42.884261 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c8a8da2-19eb-4046-963e-f1ae60b760a8-whisker-ca-bundle\") pod \"whisker-8c8d47c9c-47lfm\" (UID: \"7c8a8da2-19eb-4046-963e-f1ae60b760a8\") " pod="calico-system/whisker-8c8d47c9c-47lfm" Dec 16 13:08:42.884321 kubelet[2726]: I1216 13:08:42.884300 2726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q868x\" (UniqueName: \"kubernetes.io/projected/7c8a8da2-19eb-4046-963e-f1ae60b760a8-kube-api-access-q868x\") pod \"whisker-8c8d47c9c-47lfm\" (UID: \"7c8a8da2-19eb-4046-963e-f1ae60b760a8\") " pod="calico-system/whisker-8c8d47c9c-47lfm" Dec 16 13:08:43.068825 containerd[1540]: time="2025-12-16T13:08:43.068163651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c8d47c9c-47lfm,Uid:7c8a8da2-19eb-4046-963e-f1ae60b760a8,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:43.411984 systemd-networkd[1426]: cali96a3912d373: Link UP Dec 16 13:08:43.412319 systemd-networkd[1426]: cali96a3912d373: Gained carrier Dec 16 13:08:43.434111 containerd[1540]: 2025-12-16 13:08:43.135 [INFO][3742] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:08:43.434111 containerd[1540]: 2025-12-16 13:08:43.174 [INFO][3742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0 whisker-8c8d47c9c- calico-system 7c8a8da2-19eb-4046-963e-f1ae60b760a8 974 0 2025-12-16 13:08:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8c8d47c9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-e-d5fd5cf192 whisker-8c8d47c9c-47lfm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali96a3912d373 [] [] }} ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Namespace="calico-system" Pod="whisker-8c8d47c9c-47lfm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-" Dec 16 13:08:43.434111 containerd[1540]: 2025-12-16 13:08:43.174 [INFO][3742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Namespace="calico-system" Pod="whisker-8c8d47c9c-47lfm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0" Dec 16 13:08:43.434111 containerd[1540]: 2025-12-16 13:08:43.323 [INFO][3755] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" HandleID="k8s-pod-network.886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0" Dec 16 13:08:43.434463 containerd[1540]: 2025-12-16 13:08:43.324 [INFO][3755] ipam/ipam_plugin.go 275: Auto assigning 
IP ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" HandleID="k8s-pod-network.886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5df0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-e-d5fd5cf192", "pod":"whisker-8c8d47c9c-47lfm", "timestamp":"2025-12-16 13:08:43.323187308 +0000 UTC"}, Hostname:"ci-4459.2.2-e-d5fd5cf192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:43.434463 containerd[1540]: 2025-12-16 13:08:43.324 [INFO][3755] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:43.434463 containerd[1540]: 2025-12-16 13:08:43.324 [INFO][3755] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:08:43.434463 containerd[1540]: 2025-12-16 13:08:43.325 [INFO][3755] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-e-d5fd5cf192' Dec 16 13:08:43.434463 containerd[1540]: 2025-12-16 13:08:43.343 [INFO][3755] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:43.434463 containerd[1540]: 2025-12-16 13:08:43.355 [INFO][3755] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:43.434463 containerd[1540]: 2025-12-16 13:08:43.363 [INFO][3755] ipam/ipam.go 511: Trying affinity for 192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:43.434463 containerd[1540]: 2025-12-16 13:08:43.366 [INFO][3755] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:43.434463 containerd[1540]: 2025-12-16 13:08:43.371 [INFO][3755] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:43.434825 containerd[1540]: 2025-12-16 13:08:43.371 [INFO][3755] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:43.434825 containerd[1540]: 2025-12-16 13:08:43.374 [INFO][3755] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926 Dec 16 13:08:43.434825 containerd[1540]: 2025-12-16 13:08:43.382 [INFO][3755] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:43.434825 containerd[1540]: 2025-12-16 13:08:43.388 [INFO][3755] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.65/26] block=192.168.103.64/26 handle="k8s-pod-network.886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:43.434825 containerd[1540]: 2025-12-16 13:08:43.389 [INFO][3755] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.65/26] handle="k8s-pod-network.886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:43.434825 containerd[1540]: 2025-12-16 13:08:43.389 [INFO][3755] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:08:43.434825 containerd[1540]: 2025-12-16 13:08:43.389 [INFO][3755] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.65/26] IPv6=[] ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" HandleID="k8s-pod-network.886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0" Dec 16 13:08:43.435062 containerd[1540]: 2025-12-16 13:08:43.392 [INFO][3742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Namespace="calico-system" Pod="whisker-8c8d47c9c-47lfm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0", GenerateName:"whisker-8c8d47c9c-", Namespace:"calico-system", SelfLink:"", UID:"7c8a8da2-19eb-4046-963e-f1ae60b760a8", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8c8d47c9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"", Pod:"whisker-8c8d47c9c-47lfm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96a3912d373", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:43.435062 containerd[1540]: 2025-12-16 13:08:43.392 [INFO][3742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.65/32] ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Namespace="calico-system" Pod="whisker-8c8d47c9c-47lfm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0" Dec 16 13:08:43.436175 containerd[1540]: 2025-12-16 13:08:43.392 [INFO][3742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96a3912d373 ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Namespace="calico-system" Pod="whisker-8c8d47c9c-47lfm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0" Dec 16 13:08:43.436175 containerd[1540]: 2025-12-16 13:08:43.408 [INFO][3742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Namespace="calico-system" Pod="whisker-8c8d47c9c-47lfm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0" Dec 16 13:08:43.436229 containerd[1540]: 2025-12-16 13:08:43.410 [INFO][3742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Namespace="calico-system" Pod="whisker-8c8d47c9c-47lfm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0", GenerateName:"whisker-8c8d47c9c-", Namespace:"calico-system", SelfLink:"", UID:"7c8a8da2-19eb-4046-963e-f1ae60b760a8", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8c8d47c9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926", Pod:"whisker-8c8d47c9c-47lfm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96a3912d373", MAC:"3e:9f:5e:c2:d3:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:43.438239 containerd[1540]: 2025-12-16 13:08:43.425 [INFO][3742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" Namespace="calico-system" Pod="whisker-8c8d47c9c-47lfm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-whisker--8c8d47c9c--47lfm-eth0" Dec 16 13:08:43.548358 containerd[1540]: time="2025-12-16T13:08:43.548237938Z" level=info msg="connecting to shim 886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926" address="unix:///run/containerd/s/be43c53098013c3a740ca4399d83683aca15fc5fb85d9f7b9c4691755784af70" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:43.584260 kubelet[2726]: I1216 13:08:43.584178 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:08:43.586293 kubelet[2726]: E1216 13:08:43.586261 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:43.615714 systemd[1]: Started cri-containerd-886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926.scope - libcontainer container 886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926.
Dec 16 13:08:43.708518 containerd[1540]: time="2025-12-16T13:08:43.707110691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c8d47c9c-47lfm,Uid:7c8a8da2-19eb-4046-963e-f1ae60b760a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"886323063430f12e80f9189e0fc95a0cfcb14499e8f282d9e0c4b289d2252926\"" Dec 16 13:08:43.711315 containerd[1540]: time="2025-12-16T13:08:43.711256210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:08:44.127767 containerd[1540]: time="2025-12-16T13:08:44.127684896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:44.128914 containerd[1540]: time="2025-12-16T13:08:44.128848671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:08:44.136610 containerd[1540]: time="2025-12-16T13:08:44.128902153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:08:44.137101 kubelet[2726]: E1216 13:08:44.137046 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:08:44.137304 kubelet[2726]: E1216 13:08:44.137126 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:08:44.137304 kubelet[2726]: E1216 13:08:44.137260 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8c8d47c9c-47lfm_calico-system(7c8a8da2-19eb-4046-963e-f1ae60b760a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:44.141419 containerd[1540]: time="2025-12-16T13:08:44.141239155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:08:44.177285 kubelet[2726]: I1216 13:08:44.177226 2726 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a2f976f-c74a-49b6-b4f9-45981d88e7c0" path="/var/lib/kubelet/pods/4a2f976f-c74a-49b6-b4f9-45981d88e7c0/volumes" Dec 16 13:08:44.179268 kubelet[2726]: E1216 13:08:44.179203 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:44.180301 containerd[1540]: time="2025-12-16T13:08:44.180242193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2ktzm,Uid:ed9a1f64-2701-467a-bb1c-5afefbeb30b1,Namespace:kube-system,Attempt:0,}" Dec 16 13:08:44.493876 systemd-networkd[1426]: cali2fc45fdd14f: Link UP Dec 16 13:08:44.498106 systemd-networkd[1426]: cali2fc45fdd14f: Gained carrier 
Dec 16 13:08:44.509090 containerd[1540]: time="2025-12-16T13:08:44.509025162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:44.510537 containerd[1540]: time="2025-12-16T13:08:44.509956595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:08:44.510537 containerd[1540]: time="2025-12-16T13:08:44.510017174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:08:44.511658 kubelet[2726]: E1216 13:08:44.510947 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:08:44.511658 kubelet[2726]: E1216 13:08:44.511503 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:08:44.512395 kubelet[2726]: E1216 13:08:44.512211 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8c8d47c9c-47lfm_calico-system(7c8a8da2-19eb-4046-963e-f1ae60b760a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:44.513467 kubelet[2726]: E1216 13:08:44.513319 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c8d47c9c-47lfm" podUID="7c8a8da2-19eb-4046-963e-f1ae60b760a8" Dec 16 13:08:44.544297 containerd[1540]: 2025-12-16 13:08:44.275 [INFO][3898] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:08:44.544297 containerd[1540]: 2025-12-16 13:08:44.302 [INFO][3898] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0 coredns-66bc5c9577- kube-system ed9a1f64-2701-467a-bb1c-5afefbeb30b1 900 0 2025-12-16 13:08:01 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-e-d5fd5cf192 coredns-66bc5c9577-2ktzm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2fc45fdd14f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Namespace="kube-system" Pod="coredns-66bc5c9577-2ktzm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-" Dec 16 13:08:44.544297 containerd[1540]: 2025-12-16 13:08:44.302 [INFO][3898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Namespace="kube-system" Pod="coredns-66bc5c9577-2ktzm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0" Dec 16 13:08:44.544297 containerd[1540]: 2025-12-16 13:08:44.386 [INFO][3912] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" HandleID="k8s-pod-network.c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0" Dec 16 13:08:44.545059 containerd[1540]: 2025-12-16 13:08:44.386 [INFO][3912] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" HandleID="k8s-pod-network.c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ca120), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-e-d5fd5cf192", "pod":"coredns-66bc5c9577-2ktzm", "timestamp":"2025-12-16 13:08:44.386342199 +0000 UTC"}, Hostname:"ci-4459.2.2-e-d5fd5cf192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:44.545059 containerd[1540]: 2025-12-16 13:08:44.386 [INFO][3912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:44.545059 containerd[1540]: 2025-12-16 13:08:44.386 [INFO][3912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:08:44.545059 containerd[1540]: 2025-12-16 13:08:44.386 [INFO][3912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-e-d5fd5cf192' Dec 16 13:08:44.545059 containerd[1540]: 2025-12-16 13:08:44.411 [INFO][3912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:44.545059 containerd[1540]: 2025-12-16 13:08:44.422 [INFO][3912] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:44.545059 containerd[1540]: 2025-12-16 13:08:44.432 [INFO][3912] ipam/ipam.go 511: Trying affinity for 192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:44.545059 containerd[1540]: 2025-12-16 13:08:44.435 [INFO][3912] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:44.545059 containerd[1540]: 2025-12-16 13:08:44.441 [INFO][3912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:44.547581 containerd[1540]: 2025-12-16 13:08:44.441 [INFO][3912] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:44.547581 containerd[1540]: 2025-12-16 13:08:44.446 [INFO][3912] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185 Dec 16 13:08:44.547581 containerd[1540]: 2025-12-16 13:08:44.454 [INFO][3912] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:44.547581 containerd[1540]: 2025-12-16 13:08:44.472 [INFO][3912] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.66/26] block=192.168.103.64/26 handle="k8s-pod-network.c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:44.547581 containerd[1540]: 2025-12-16 13:08:44.473 [INFO][3912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.66/26] handle="k8s-pod-network.c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:44.547581 containerd[1540]: 2025-12-16 13:08:44.474 [INFO][3912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:08:44.547581 containerd[1540]: 2025-12-16 13:08:44.474 [INFO][3912] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.66/26] IPv6=[] ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" HandleID="k8s-pod-network.c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0" Dec 16 13:08:44.547832 containerd[1540]: 2025-12-16 13:08:44.484 [INFO][3898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Namespace="kube-system" Pod="coredns-66bc5c9577-2ktzm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ed9a1f64-2701-467a-bb1c-5afefbeb30b1", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"", Pod:"coredns-66bc5c9577-2ktzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fc45fdd14f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:44.547832 containerd[1540]: 2025-12-16 13:08:44.484 [INFO][3898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.66/32] ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Namespace="kube-system" Pod="coredns-66bc5c9577-2ktzm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0" Dec 16 13:08:44.547832 containerd[1540]: 2025-12-16 13:08:44.484 [INFO][3898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fc45fdd14f ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Namespace="kube-system" Pod="coredns-66bc5c9577-2ktzm"
WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0" Dec 16 13:08:44.547832 containerd[1540]: 2025-12-16 13:08:44.502 [INFO][3898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Namespace="kube-system" Pod="coredns-66bc5c9577-2ktzm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0" Dec 16 13:08:44.547832 containerd[1540]: 2025-12-16 13:08:44.503 [INFO][3898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Namespace="kube-system" Pod="coredns-66bc5c9577-2ktzm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ed9a1f64-2701-467a-bb1c-5afefbeb30b1", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185", Pod:"coredns-66bc5c9577-2ktzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fc45fdd14f", MAC:"fe:a6:73:6e:d7:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:44.548186 containerd[1540]: 2025-12-16 13:08:44.538 [INFO][3898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" Namespace="kube-system" Pod="coredns-66bc5c9577-2ktzm" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--2ktzm-eth0" Dec 16 13:08:44.584310 systemd-networkd[1426]: cali96a3912d373: Gained IPv6LL Dec 16 13:08:44.618605 kubelet[2726]: E1216 13:08:44.618386 2726 pod_workers.go:1324] "Error syncing pod, skipping"
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c8d47c9c-47lfm" podUID="7c8a8da2-19eb-4046-963e-f1ae60b760a8" Dec 16 13:08:44.626798 containerd[1540]: time="2025-12-16T13:08:44.626586095Z" level=info msg="connecting to shim c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185" address="unix:///run/containerd/s/07b283d97f4c69b1a6e3ce52ee7ec76fb80acd962f4cd48839b6808a46dcec6a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:44.701718 systemd[1]: Started cri-containerd-c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185.scope - libcontainer container c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185. Dec 16 13:08:44.856675 containerd[1540]: time="2025-12-16T13:08:44.856542380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2ktzm,Uid:ed9a1f64-2701-467a-bb1c-5afefbeb30b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185\"" Dec 16 13:08:44.859825 kubelet[2726]: E1216 13:08:44.859756 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:44.873044 containerd[1540]: time="2025-12-16T13:08:44.872659729Z" level=info msg="CreateContainer within sandbox \"c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:08:44.901965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003747806.mount: Deactivated successfully. 
Dec 16 13:08:44.904285 containerd[1540]: time="2025-12-16T13:08:44.903986586Z" level=info msg="Container d73ad564fe9b8d22c65138585e906b2683f1dc88d403ed4902b47e7966bc6322: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:44.934322 containerd[1540]: time="2025-12-16T13:08:44.934123048Z" level=info msg="CreateContainer within sandbox \"c8dec2fee0c1c1e44c5efde7ea4f0ee446c1f68aa0846cbd3a9b5ba64b301185\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d73ad564fe9b8d22c65138585e906b2683f1dc88d403ed4902b47e7966bc6322\"" Dec 16 13:08:44.939238 containerd[1540]: time="2025-12-16T13:08:44.937397367Z" level=info msg="StartContainer for \"d73ad564fe9b8d22c65138585e906b2683f1dc88d403ed4902b47e7966bc6322\"" Dec 16 13:08:44.940590 containerd[1540]: time="2025-12-16T13:08:44.940386932Z" level=info msg="connecting to shim d73ad564fe9b8d22c65138585e906b2683f1dc88d403ed4902b47e7966bc6322" address="unix:///run/containerd/s/07b283d97f4c69b1a6e3ce52ee7ec76fb80acd962f4cd48839b6808a46dcec6a" protocol=ttrpc version=3 Dec 16 13:08:44.998757 systemd[1]: Started cri-containerd-d73ad564fe9b8d22c65138585e906b2683f1dc88d403ed4902b47e7966bc6322.scope - libcontainer container d73ad564fe9b8d22c65138585e906b2683f1dc88d403ed4902b47e7966bc6322. Dec 16 13:08:45.079201 containerd[1540]: time="2025-12-16T13:08:45.079130621Z" level=info msg="StartContainer for \"d73ad564fe9b8d22c65138585e906b2683f1dc88d403ed4902b47e7966bc6322\" returns successfully" Dec 16 13:08:45.177528 containerd[1540]: time="2025-12-16T13:08:45.177109557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49c7467-w7dmv,Uid:c5c3da84-f0c0-494e-b5ab-338a3db3dbfc,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:08:45.179591 containerd[1540]: time="2025-12-16T13:08:45.179541660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49c7467-qxkcz,Uid:3c97abc9-28b5-46fd-ac48-9268ba05dd67,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:08:45.202305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159785215.mount: Deactivated successfully. 
Dec 16 13:08:45.469851 systemd-networkd[1426]: cali8b18c834fba: Link UP Dec 16 13:08:45.473120 systemd-networkd[1426]: cali8b18c834fba: Gained carrier Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.291 [INFO][4048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0 calico-apiserver-5d49c7467- calico-apiserver 3c97abc9-28b5-46fd-ac48-9268ba05dd67 901 0 2025-12-16 13:08:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d49c7467 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-e-d5fd5cf192 calico-apiserver-5d49c7467-qxkcz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8b18c834fba [] [] }} ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-qxkcz" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.291 [INFO][4048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-qxkcz" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.372 [INFO][4067] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" HandleID="k8s-pod-network.5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.375 [INFO][4067] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" HandleID="k8s-pod-network.5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039b1a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-e-d5fd5cf192", "pod":"calico-apiserver-5d49c7467-qxkcz", "timestamp":"2025-12-16 13:08:45.372935444 +0000 UTC"}, Hostname:"ci-4459.2.2-e-d5fd5cf192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.376 [INFO][4067] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.377 [INFO][4067] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.377 [INFO][4067] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-e-d5fd5cf192' Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.403 [INFO][4067] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.412 [INFO][4067] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.420 [INFO][4067] ipam/ipam.go 511: Trying affinity for 192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.423 [INFO][4067] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.427 [INFO][4067] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.427 [INFO][4067] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.430 [INFO][4067] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8 Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.438 [INFO][4067] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.449 [INFO][4067] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.67/26] block=192.168.103.64/26 handle="k8s-pod-network.5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.450 [INFO][4067] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.67/26] handle="k8s-pod-network.5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.450 [INFO][4067] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:08:45.497457 containerd[1540]: 2025-12-16 13:08:45.451 [INFO][4067] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.67/26] IPv6=[] ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" HandleID="k8s-pod-network.5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0" Dec 16 13:08:45.498230 containerd[1540]: 2025-12-16 13:08:45.459 [INFO][4048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-qxkcz" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0", GenerateName:"calico-apiserver-5d49c7467-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c97abc9-28b5-46fd-ac48-9268ba05dd67", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d49c7467", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"", Pod:"calico-apiserver-5d49c7467-qxkcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b18c834fba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:45.498230 containerd[1540]: 2025-12-16 13:08:45.460 [INFO][4048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.67/32] ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-qxkcz" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0" Dec 16 13:08:45.498230 containerd[1540]: 2025-12-16 13:08:45.460 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b18c834fba ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-qxkcz" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0" Dec 16 13:08:45.498230 containerd[1540]: 2025-12-16 13:08:45.474 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-qxkcz" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0" Dec 16 13:08:45.498230 containerd[1540]: 2025-12-16 13:08:45.475 [INFO][4048]
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-qxkcz" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0", GenerateName:"calico-apiserver-5d49c7467-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c97abc9-28b5-46fd-ac48-9268ba05dd67", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d49c7467", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8", Pod:"calico-apiserver-5d49c7467-qxkcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b18c834fba", MAC:"9a:b5:5c:89:e8:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:45.498230 containerd[1540]: 2025-12-16 13:08:45.494 [INFO][4048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-qxkcz" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--qxkcz-eth0" Dec 16 13:08:45.552915 containerd[1540]: time="2025-12-16T13:08:45.552755282Z" level=info msg="connecting to shim 5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8" address="unix:///run/containerd/s/7610fbde48f8826028a415dfd14e2906a5b66f9ace6ad5272856212a23054a05" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:45.616818 kubelet[2726]: E1216 13:08:45.616778 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:45.619961 kubelet[2726]: E1216 13:08:45.619041 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\":
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c8d47c9c-47lfm" podUID="7c8a8da2-19eb-4046-963e-f1ae60b760a8" Dec 16 13:08:45.664710 systemd[1]: Started cri-containerd-5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8.scope - libcontainer container 5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8. Dec 16 13:08:45.689531 systemd-networkd[1426]: cali02db6889eb0: Link UP Dec 16 13:08:45.694014 systemd-networkd[1426]: cali02db6889eb0: Gained carrier Dec 16 13:08:45.707570 kubelet[2726]: I1216 13:08:45.706743 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2ktzm" podStartSLOduration=44.706692048 podStartE2EDuration="44.706692048s" podCreationTimestamp="2025-12-16 13:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:08:45.653065888 +0000 UTC m=+49.985926045" watchObservedRunningTime="2025-12-16 13:08:45.706692048 +0000 UTC m=+50.039552209" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.361 [INFO][4045] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0 calico-apiserver-5d49c7467- calico-apiserver c5c3da84-f0c0-494e-b5ab-338a3db3dbfc 903 0 2025-12-16 13:08:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d49c7467 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-e-d5fd5cf192 calico-apiserver-5d49c7467-w7dmv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali02db6889eb0 [] [] }} ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-w7dmv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.361 [INFO][4045] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-w7dmv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.446 [INFO][4077] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" HandleID="k8s-pod-network.95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.447 [INFO][4077] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" HandleID="k8s-pod-network.95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00004f960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-e-d5fd5cf192", "pod":"calico-apiserver-5d49c7467-w7dmv", "timestamp":"2025-12-16 13:08:45.446876704 +0000 UTC"}, Hostname:"ci-4459.2.2-e-d5fd5cf192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.447 [INFO][4077] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.450 [INFO][4077] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.451 [INFO][4077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-e-d5fd5cf192' Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.505 [INFO][4077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.516 [INFO][4077] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.539 [INFO][4077] ipam/ipam.go 511: Trying affinity for 192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.548 [INFO][4077] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.556 [INFO][4077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.556 [INFO][4077] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.561 [INFO][4077] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.581 [INFO][4077] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.627 [INFO][4077] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.68/26] block=192.168.103.64/26 handle="k8s-pod-network.95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.630 [INFO][4077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.68/26] handle="k8s-pod-network.95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.632 [INFO][4077] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:08:45.756059 containerd[1540]: 2025-12-16 13:08:45.633 [INFO][4077] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.68/26] IPv6=[] ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" HandleID="k8s-pod-network.95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0" Dec 16 13:08:45.757007 containerd[1540]: 2025-12-16 13:08:45.655 [INFO][4045] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-w7dmv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0", GenerateName:"calico-apiserver-5d49c7467-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5c3da84-f0c0-494e-b5ab-338a3db3dbfc", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d49c7467", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"", Pod:"calico-apiserver-5d49c7467-w7dmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02db6889eb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:45.757007 containerd[1540]: 2025-12-16 13:08:45.674 [INFO][4045] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.68/32] ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-w7dmv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0" Dec 16 13:08:45.757007 containerd[1540]: 2025-12-16 13:08:45.675 [INFO][4045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02db6889eb0 ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-w7dmv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0" Dec 16 13:08:45.757007 containerd[1540]: 2025-12-16 13:08:45.691 [INFO][4045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-w7dmv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0" Dec 16 13:08:45.757007 containerd[1540]: 2025-12-16 13:08:45.704 [INFO][4045] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-w7dmv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0", GenerateName:"calico-apiserver-5d49c7467-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5c3da84-f0c0-494e-b5ab-338a3db3dbfc", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d49c7467", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d", Pod:"calico-apiserver-5d49c7467-w7dmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02db6889eb0", MAC:"a2:38:94:23:6c:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:45.757007 containerd[1540]: 2025-12-16 13:08:45.748 [INFO][4045] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" Namespace="calico-apiserver" Pod="calico-apiserver-5d49c7467-w7dmv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--apiserver--5d49c7467--w7dmv-eth0" Dec 16 13:08:45.821614 containerd[1540]: time="2025-12-16T13:08:45.821532534Z" level=info msg="connecting to shim 95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d" address="unix:///run/containerd/s/3e0e3c07e7b8dac690f8838ee2663a7c38b06d9cf5dd182559192b2b299f77e7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:45.885653 systemd[1]: Started cri-containerd-95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d.scope - libcontainer container 95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d. 
Dec 16 13:08:45.897206 containerd[1540]: time="2025-12-16T13:08:45.897151729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49c7467-qxkcz,Uid:3c97abc9-28b5-46fd-ac48-9268ba05dd67,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5b3cdf7fed35c864e36a7cf94c330855ab34b3f796f3e8a574313f4c9eeca8f8\"" Dec 16 13:08:45.902374 containerd[1540]: time="2025-12-16T13:08:45.902300834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:08:46.003483 containerd[1540]: time="2025-12-16T13:08:46.003331258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49c7467-w7dmv,Uid:c5c3da84-f0c0-494e-b5ab-338a3db3dbfc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"95f766b2812814e026950dfca23d2ef157504fe3d8d11a0abb9e7a73cf7ebf1d\"" Dec 16 13:08:46.048406 systemd-networkd[1426]: vxlan.calico: Link UP Dec 16 13:08:46.048419 systemd-networkd[1426]: vxlan.calico: Gained carrier Dec 16 13:08:46.172083 kubelet[2726]: E1216 13:08:46.172000 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:46.174104 containerd[1540]: time="2025-12-16T13:08:46.173966940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pz4l6,Uid:ede2e643-66a7-48be-bdf2-068efa6cf822,Namespace:kube-system,Attempt:0,}" Dec 16 13:08:46.175089 containerd[1540]: time="2025-12-16T13:08:46.174922833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f569d77f-ndtwq,Uid:9e067442-1617-4c9f-a618-5f4c28d671bd,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:46.220077 containerd[1540]: time="2025-12-16T13:08:46.219775109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:46.221475 containerd[1540]: time="2025-12-16T13:08:46.221295665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:08:46.221475 containerd[1540]: time="2025-12-16T13:08:46.221434506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:46.223463 kubelet[2726]: E1216 13:08:46.221779 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:46.223610 kubelet[2726]: E1216 13:08:46.223439 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:46.223681 kubelet[2726]: E1216 13:08:46.223624 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49c7467-qxkcz_calico-apiserver(3c97abc9-28b5-46fd-ac48-9268ba05dd67): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:46.223681 kubelet[2726]: E1216 13:08:46.223664 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" podUID="3c97abc9-28b5-46fd-ac48-9268ba05dd67" Dec 16 13:08:46.225731 containerd[1540]: time="2025-12-16T13:08:46.225671077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:08:46.437545 systemd-networkd[1426]: cali2fc45fdd14f: Gained IPv6LL Dec 16 13:08:46.524812 systemd-networkd[1426]: cali216cf21af09: Link UP Dec 16 13:08:46.527643 systemd-networkd[1426]: cali216cf21af09: Gained carrier Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.310 [INFO][4227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0 coredns-66bc5c9577- kube-system ede2e643-66a7-48be-bdf2-068efa6cf822 892 0 2025-12-16 13:08:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-e-d5fd5cf192 coredns-66bc5c9577-pz4l6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali216cf21af09 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Namespace="kube-system" Pod="coredns-66bc5c9577-pz4l6" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.312 [INFO][4227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Namespace="kube-system" Pod="coredns-66bc5c9577-pz4l6" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.428 [INFO][4249] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" HandleID="k8s-pod-network.e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.429 [INFO][4249] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" HandleID="k8s-pod-network.e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-e-d5fd5cf192", "pod":"coredns-66bc5c9577-pz4l6", "timestamp":"2025-12-16 13:08:46.428442325 +0000 
UTC"}, Hostname:"ci-4459.2.2-e-d5fd5cf192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.429 [INFO][4249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.429 [INFO][4249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.430 [INFO][4249] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-e-d5fd5cf192' Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.451 [INFO][4249] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.461 [INFO][4249] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.473 [INFO][4249] ipam/ipam.go 511: Trying affinity for 192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.478 [INFO][4249] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.482 [INFO][4249] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.482 [INFO][4249] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.485 [INFO][4249] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.492 [INFO][4249] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.509 [INFO][4249] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.69/26] block=192.168.103.64/26 handle="k8s-pod-network.e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.509 [INFO][4249] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.69/26] handle="k8s-pod-network.e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.509 [INFO][4249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:08:46.558388 containerd[1540]: 2025-12-16 13:08:46.509 [INFO][4249] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.69/26] IPv6=[] ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" HandleID="k8s-pod-network.e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0" Dec 16 13:08:46.560310 containerd[1540]: 2025-12-16 13:08:46.517 [INFO][4227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Namespace="kube-system" Pod="coredns-66bc5c9577-pz4l6" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ede2e643-66a7-48be-bdf2-068efa6cf822", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"", Pod:"coredns-66bc5c9577-pz4l6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali216cf21af09", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:46.560310 containerd[1540]: 2025-12-16 13:08:46.518 [INFO][4227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.69/32] ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Namespace="kube-system" Pod="coredns-66bc5c9577-pz4l6" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0" Dec 16 13:08:46.560310 containerd[1540]: 2025-12-16 13:08:46.518 [INFO][4227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali216cf21af09 ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Namespace="kube-system" Pod="coredns-66bc5c9577-pz4l6" 
WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0" Dec 16 13:08:46.560310 containerd[1540]: 2025-12-16 13:08:46.528 [INFO][4227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Namespace="kube-system" Pod="coredns-66bc5c9577-pz4l6" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0" Dec 16 13:08:46.560310 containerd[1540]: 2025-12-16 13:08:46.531 [INFO][4227] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Namespace="kube-system" Pod="coredns-66bc5c9577-pz4l6" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ede2e643-66a7-48be-bdf2-068efa6cf822", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd", Pod:"coredns-66bc5c9577-pz4l6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali216cf21af09", MAC:"b2:4b:85:95:a3:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:46.563481 containerd[1540]: 2025-12-16 13:08:46.551 [INFO][4227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" Namespace="kube-system" Pod="coredns-66bc5c9577-pz4l6" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-coredns--66bc5c9577--pz4l6-eth0" Dec 16 13:08:46.588429 containerd[1540]: time="2025-12-16T13:08:46.588161638Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:46.620128 containerd[1540]: 
time="2025-12-16T13:08:46.620007490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:08:46.620747 containerd[1540]: time="2025-12-16T13:08:46.620335143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:46.621078 kubelet[2726]: E1216 13:08:46.620604 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:46.622563 kubelet[2726]: E1216 13:08:46.621722 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:46.622563 kubelet[2726]: E1216 13:08:46.621891 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49c7467-w7dmv_calico-apiserver(c5c3da84-f0c0-494e-b5ab-338a3db3dbfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:46.622563 kubelet[2726]: E1216 13:08:46.621932 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" podUID="c5c3da84-f0c0-494e-b5ab-338a3db3dbfc" Dec 16 13:08:46.629801 systemd-networkd[1426]: cali8b18c834fba: Gained IPv6LL Dec 16 13:08:46.691193 containerd[1540]: time="2025-12-16T13:08:46.691016773Z" level=info msg="connecting to shim e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd" address="unix:///run/containerd/s/e359591b714371721ce5355d545b4ff8423fbb272dbb2b23ce1e83b58fb0c4ff" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:46.740651 kubelet[2726]: E1216 13:08:46.739922 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:46.768224 kubelet[2726]: E1216 13:08:46.767946 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" podUID="3c97abc9-28b5-46fd-ac48-9268ba05dd67" Dec 16 13:08:46.784838 systemd-networkd[1426]: cali5d7171d7c8a: Link UP Dec 16 13:08:46.787587 systemd-networkd[1426]: cali5d7171d7c8a: Gained carrier Dec 16 13:08:46.826631 systemd[1]: Started cri-containerd-e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd.scope - libcontainer container e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd. Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.365 [INFO][4238] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0 calico-kube-controllers-9f569d77f- calico-system 9e067442-1617-4c9f-a618-5f4c28d671bd 898 0 2025-12-16 13:08:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9f569d77f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-e-d5fd5cf192 calico-kube-controllers-9f569d77f-ndtwq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5d7171d7c8a [] [] }} ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Namespace="calico-system" Pod="calico-kube-controllers-9f569d77f-ndtwq" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.365 [INFO][4238] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Namespace="calico-system" Pod="calico-kube-controllers-9f569d77f-ndtwq" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.465 [INFO][4254] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" HandleID="k8s-pod-network.2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.465 [INFO][4254] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" HandleID="k8s-pod-network.2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-e-d5fd5cf192", "pod":"calico-kube-controllers-9f569d77f-ndtwq", "timestamp":"2025-12-16 13:08:46.465515145 +0000 UTC"}, Hostname:"ci-4459.2.2-e-d5fd5cf192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.465 [INFO][4254] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.509 [INFO][4254] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.509 [INFO][4254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-e-d5fd5cf192' Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.554 [INFO][4254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.577 [INFO][4254] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.592 [INFO][4254] ipam/ipam.go 511: Trying affinity for 192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.604 [INFO][4254] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.642 [INFO][4254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.643 [INFO][4254] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.652 [INFO][4254] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.665 [INFO][4254] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.685 [INFO][4254] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.70/26] block=192.168.103.64/26 handle="k8s-pod-network.2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.685 [INFO][4254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.70/26] handle="k8s-pod-network.2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.686 [INFO][4254] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:08:46.916712 containerd[1540]: 2025-12-16 13:08:46.686 [INFO][4254] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.70/26] IPv6=[] ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" HandleID="k8s-pod-network.2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0" Dec 16 13:08:46.917587 containerd[1540]: 2025-12-16 13:08:46.706 [INFO][4238] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Namespace="calico-system" Pod="calico-kube-controllers-9f569d77f-ndtwq" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0", GenerateName:"calico-kube-controllers-9f569d77f-", Namespace:"calico-system", SelfLink:"", UID:"9e067442-1617-4c9f-a618-5f4c28d671bd", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9f569d77f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"", Pod:"calico-kube-controllers-9f569d77f-ndtwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d7171d7c8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:46.917587 containerd[1540]: 2025-12-16 13:08:46.717 [INFO][4238] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.70/32] ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Namespace="calico-system" Pod="calico-kube-controllers-9f569d77f-ndtwq" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0" Dec 16 13:08:46.917587 containerd[1540]: 2025-12-16 13:08:46.718 [INFO][4238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d7171d7c8a ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Namespace="calico-system" Pod="calico-kube-controllers-9f569d77f-ndtwq" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0" Dec 16 13:08:46.917587 containerd[1540]: 2025-12-16 13:08:46.790 [INFO][4238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Namespace="calico-system" Pod="calico-kube-controllers-9f569d77f-ndtwq" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0" Dec 
16 13:08:46.917587 containerd[1540]: 2025-12-16 13:08:46.799 [INFO][4238] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Namespace="calico-system" Pod="calico-kube-controllers-9f569d77f-ndtwq" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0", GenerateName:"calico-kube-controllers-9f569d77f-", Namespace:"calico-system", SelfLink:"", UID:"9e067442-1617-4c9f-a618-5f4c28d671bd", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9f569d77f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d", Pod:"calico-kube-controllers-9f569d77f-ndtwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d7171d7c8a", MAC:"6a:c4:91:3c:75:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:46.917587 containerd[1540]: 2025-12-16 13:08:46.910 [INFO][4238] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" Namespace="calico-system" Pod="calico-kube-controllers-9f569d77f-ndtwq" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-calico--kube--controllers--9f569d77f--ndtwq-eth0" Dec 16 13:08:47.015926 containerd[1540]: time="2025-12-16T13:08:47.015689188Z" level=info msg="connecting to shim 2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d" address="unix:///run/containerd/s/b997ce5d237ca778fa566f032fe82d49428571319d63b5b4218c2d541d25b545" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:47.030913 containerd[1540]: time="2025-12-16T13:08:47.030792345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pz4l6,Uid:ede2e643-66a7-48be-bdf2-068efa6cf822,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd\"" Dec 16 13:08:47.037743 kubelet[2726]: E1216 13:08:47.037543 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:47.114960 containerd[1540]: time="2025-12-16T13:08:47.114852258Z" level=info msg="CreateContainer within sandbox \"e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:08:47.132374 containerd[1540]: time="2025-12-16T13:08:47.132278701Z" level=info msg="Container 923b947f924849235e8ad301265905bb33e8eb80b7e0423b14c9c65d0fbbabf1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:47.133531 systemd[1]: Started cri-containerd-2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d.scope - libcontainer container 2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d. Dec 16 13:08:47.150745 containerd[1540]: time="2025-12-16T13:08:47.150690016Z" level=info msg="CreateContainer within sandbox \"e7a9fabcdc93b5248c8aa65fe74cfd06bd44b750c96992f6eba07cc0a48861bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"923b947f924849235e8ad301265905bb33e8eb80b7e0423b14c9c65d0fbbabf1\"" Dec 16 13:08:47.154754 containerd[1540]: time="2025-12-16T13:08:47.154648836Z" level=info msg="StartContainer for \"923b947f924849235e8ad301265905bb33e8eb80b7e0423b14c9c65d0fbbabf1\"" Dec 16 13:08:47.159575 containerd[1540]: time="2025-12-16T13:08:47.159522321Z" level=info msg="connecting to shim 923b947f924849235e8ad301265905bb33e8eb80b7e0423b14c9c65d0fbbabf1" address="unix:///run/containerd/s/e359591b714371721ce5355d545b4ff8423fbb272dbb2b23ce1e83b58fb0c4ff" protocol=ttrpc version=3 Dec 16 13:08:47.175690 containerd[1540]: time="2025-12-16T13:08:47.175623323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqzhv,Uid:5092d504-cc04-4db5-bde7-b900923744da,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:47.249954 systemd[1]: Started cri-containerd-923b947f924849235e8ad301265905bb33e8eb80b7e0423b14c9c65d0fbbabf1.scope - libcontainer container 923b947f924849235e8ad301265905bb33e8eb80b7e0423b14c9c65d0fbbabf1. Dec 16 13:08:47.390145 containerd[1540]: time="2025-12-16T13:08:47.390023520Z" level=info msg="StartContainer for \"923b947f924849235e8ad301265905bb33e8eb80b7e0423b14c9c65d0fbbabf1\" returns successfully" Dec 16 13:08:47.393151 kubelet[2726]: I1216 13:08:47.392859 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:08:47.394576 kubelet[2726]: E1216 13:08:47.393511 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:47.526616 systemd-networkd[1426]: cali02db6889eb0: Gained IPv6LL Dec 16 13:08:47.630375 containerd[1540]: time="2025-12-16T13:08:47.630276885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f569d77f-ndtwq,Uid:9e067442-1617-4c9f-a618-5f4c28d671bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a393b0a8f7bc0ab8f2cfee11664b683b4d263bcc9a69d02224be8fcbfb3625d\"" Dec 16 13:08:47.637480 containerd[1540]: time="2025-12-16T13:08:47.637390978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:08:47.655551 systemd-networkd[1426]: vxlan.calico: Gained IPv6LL Dec 16 13:08:47.755976 kubelet[2726]: E1216 13:08:47.755941 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:47.771091 kubelet[2726]: E1216 13:08:47.770490 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:47.773378 kubelet[2726]: E1216 
13:08:47.773245 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" podUID="3c97abc9-28b5-46fd-ac48-9268ba05dd67" Dec 16 13:08:47.773566 kubelet[2726]: E1216 13:08:47.772427 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" podUID="c5c3da84-f0c0-494e-b5ab-338a3db3dbfc" Dec 16 13:08:47.783211 systemd-networkd[1426]: calibc32e7dec9f: Link UP Dec 16 13:08:47.793097 systemd-networkd[1426]: calibc32e7dec9f: Gained carrier Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.358 [INFO][4382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0 csi-node-driver- calico-system 5092d504-cc04-4db5-bde7-b900923744da 776 0 2025-12-16 13:08:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-e-d5fd5cf192 csi-node-driver-zqzhv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibc32e7dec9f [] [] }} ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Namespace="calico-system" Pod="csi-node-driver-zqzhv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.358 [INFO][4382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Namespace="calico-system" Pod="csi-node-driver-zqzhv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.463 [INFO][4429] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" HandleID="k8s-pod-network.a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.463 [INFO][4429] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" HandleID="k8s-pod-network.a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003af620), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-e-d5fd5cf192", "pod":"csi-node-driver-zqzhv", "timestamp":"2025-12-16 13:08:47.463716928 +0000 UTC"}, Hostname:"ci-4459.2.2-e-d5fd5cf192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.464 [INFO][4429] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.464 [INFO][4429] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.464 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-e-d5fd5cf192' Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.599 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.671 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.692 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.697 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.704 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.704 [INFO][4429] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.707 [INFO][4429] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708 Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.725 [INFO][4429] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.759 [INFO][4429] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.71/26] block=192.168.103.64/26 handle="k8s-pod-network.a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.759 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.71/26] handle="k8s-pod-network.a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.759 [INFO][4429] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:08:47.845441 containerd[1540]: 2025-12-16 13:08:47.760 [INFO][4429] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.71/26] IPv6=[] ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" HandleID="k8s-pod-network.a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0" Dec 16 13:08:47.846114 containerd[1540]: 2025-12-16 13:08:47.772 [INFO][4382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Namespace="calico-system" Pod="csi-node-driver-zqzhv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5092d504-cc04-4db5-bde7-b900923744da", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"", Pod:"csi-node-driver-zqzhv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc32e7dec9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:47.846114 containerd[1540]: 2025-12-16 13:08:47.772 [INFO][4382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.71/32] ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Namespace="calico-system" Pod="csi-node-driver-zqzhv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0" Dec 16 13:08:47.846114 containerd[1540]: 2025-12-16 13:08:47.772 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc32e7dec9f ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Namespace="calico-system" Pod="csi-node-driver-zqzhv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0" Dec 16 13:08:47.846114 containerd[1540]: 2025-12-16 13:08:47.799 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Namespace="calico-system" Pod="csi-node-driver-zqzhv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0" Dec 16 13:08:47.846114 containerd[1540]: 2025-12-16 13:08:47.800 [INFO][4382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Namespace="calico-system" Pod="csi-node-driver-zqzhv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5092d504-cc04-4db5-bde7-b900923744da", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708", Pod:"csi-node-driver-zqzhv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc32e7dec9f", MAC:"8a:99:69:e0:67:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:47.846114 containerd[1540]: 2025-12-16 13:08:47.837 [INFO][4382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" Namespace="calico-system" Pod="csi-node-driver-zqzhv" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-csi--node--driver--zqzhv-eth0" Dec 16 13:08:47.845590 systemd-networkd[1426]: cali216cf21af09: Gained IPv6LL Dec 16 13:08:47.888791 containerd[1540]: time="2025-12-16T13:08:47.888686004Z" level=info msg="connecting to shim a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708" address="unix:///run/containerd/s/ab397cd2addb5c2733f4930a09b0de608331accd1c266865a0f07e0be242df23" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:47.926559 kubelet[2726]: I1216 13:08:47.924503 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pz4l6" podStartSLOduration=46.924471183 podStartE2EDuration="46.924471183s" podCreationTimestamp="2025-12-16 13:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:08:47.839688523 +0000 UTC m=+52.172548678" watchObservedRunningTime="2025-12-16 13:08:47.924471183 +0000 UTC m=+52.257331342" Dec 16 13:08:47.952520 systemd[1]: Started cri-containerd-a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708.scope - libcontainer container a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708. 
Dec 16 13:08:48.042855 containerd[1540]: time="2025-12-16T13:08:48.042605944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:48.043869 containerd[1540]: time="2025-12-16T13:08:48.043810097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:08:48.044175 containerd[1540]: time="2025-12-16T13:08:48.043867851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:08:48.044910 kubelet[2726]: E1216 13:08:48.044779 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:08:48.044910 kubelet[2726]: E1216 13:08:48.044843 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:08:48.045490 kubelet[2726]: E1216 13:08:48.045162 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-9f569d77f-ndtwq_calico-system(9e067442-1617-4c9f-a618-5f4c28d671bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:48.045490 kubelet[2726]: E1216 13:08:48.045222 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" podUID="9e067442-1617-4c9f-a618-5f4c28d671bd" Dec 16 13:08:48.107930 containerd[1540]: time="2025-12-16T13:08:48.107863662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqzhv,Uid:5092d504-cc04-4db5-bde7-b900923744da,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1124f23fc0ad7bfe9ed5376c2c03f6d61aa2e5202c24c5a4c84ba4435e53708\"" Dec 16 13:08:48.115511 containerd[1540]: time="2025-12-16T13:08:48.115440775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:08:48.183191 containerd[1540]: time="2025-12-16T13:08:48.182913689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tgf4g,Uid:6c535668-f4bd-4af5-9cdf-87c693c12696,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:48.208437 kubelet[2726]: E1216 
13:08:48.208324 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:48.360003 systemd-networkd[1426]: cali5d7171d7c8a: Gained IPv6LL Dec 16 13:08:48.470242 containerd[1540]: time="2025-12-16T13:08:48.469899672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:48.472419 containerd[1540]: time="2025-12-16T13:08:48.471691704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:08:48.472419 containerd[1540]: time="2025-12-16T13:08:48.471928134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:08:48.473149 kubelet[2726]: E1216 13:08:48.472878 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:08:48.473149 kubelet[2726]: E1216 13:08:48.472928 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:08:48.476378 kubelet[2726]: E1216 13:08:48.475407 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zqzhv_calico-system(5092d504-cc04-4db5-bde7-b900923744da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:48.477585 containerd[1540]: time="2025-12-16T13:08:48.477490968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:08:48.489050 systemd-networkd[1426]: cali85dec4f6e9e: Link UP Dec 16 13:08:48.492403 systemd-networkd[1426]: cali85dec4f6e9e: Gained carrier Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.314 [INFO][4543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0 goldmane-7c778bb748- calico-system 6c535668-f4bd-4af5-9cdf-87c693c12696 902 0 2025-12-16 13:08:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-e-d5fd5cf192 goldmane-7c778bb748-tgf4g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali85dec4f6e9e [] [] }} ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" Namespace="calico-system" Pod="goldmane-7c778bb748-tgf4g" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-" Dec 16 13:08:48.521892 containerd[1540]: 
2025-12-16 13:08:48.315 [INFO][4543] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" Namespace="calico-system" Pod="goldmane-7c778bb748-tgf4g" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.379 [INFO][4578] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" HandleID="k8s-pod-network.5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.380 [INFO][4578] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" HandleID="k8s-pod-network.5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-e-d5fd5cf192", "pod":"goldmane-7c778bb748-tgf4g", "timestamp":"2025-12-16 13:08:48.379520437 +0000 UTC"}, Hostname:"ci-4459.2.2-e-d5fd5cf192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.381 [INFO][4578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.381 [INFO][4578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.382 [INFO][4578] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-e-d5fd5cf192' Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.396 [INFO][4578] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.407 [INFO][4578] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.430 [INFO][4578] ipam/ipam.go 511: Trying affinity for 192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.436 [INFO][4578] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.444 [INFO][4578] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.64/26 host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.446 [INFO][4578] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.64/26 handle="k8s-pod-network.5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.451 [INFO][4578] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386 Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.458 [INFO][4578] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.64/26 handle="k8s-pod-network.5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.468 [INFO][4578] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.72/26] block=192.168.103.64/26 handle="k8s-pod-network.5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.471 [INFO][4578] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.72/26] handle="k8s-pod-network.5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" host="ci-4459.2.2-e-d5fd5cf192" Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.473 [INFO][4578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:08:48.521892 containerd[1540]: 2025-12-16 13:08:48.474 [INFO][4578] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.72/26] IPv6=[] ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" HandleID="k8s-pod-network.5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" Workload="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0" Dec 16 13:08:48.524424 containerd[1540]: 2025-12-16 13:08:48.483 [INFO][4543] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" Namespace="calico-system" Pod="goldmane-7c778bb748-tgf4g" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6c535668-f4bd-4af5-9cdf-87c693c12696", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"", Pod:"goldmane-7c778bb748-tgf4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali85dec4f6e9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:48.524424 containerd[1540]: 2025-12-16 13:08:48.483 [INFO][4543] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.72/32] ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" Namespace="calico-system" Pod="goldmane-7c778bb748-tgf4g" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0" Dec 16 13:08:48.524424 containerd[1540]: 2025-12-16 13:08:48.483 [INFO][4543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85dec4f6e9e ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" Namespace="calico-system" Pod="goldmane-7c778bb748-tgf4g" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0" Dec 16 13:08:48.524424 containerd[1540]: 2025-12-16 13:08:48.493 [INFO][4543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" Namespace="calico-system" Pod="goldmane-7c778bb748-tgf4g" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0" Dec 16 13:08:48.524424 containerd[1540]: 2025-12-16 13:08:48.493 [INFO][4543] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" 
Namespace="calico-system" Pod="goldmane-7c778bb748-tgf4g" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6c535668-f4bd-4af5-9cdf-87c693c12696", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-e-d5fd5cf192", ContainerID:"5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386", Pod:"goldmane-7c778bb748-tgf4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali85dec4f6e9e", MAC:"12:50:ee:db:b1:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:48.524424 containerd[1540]: 2025-12-16 13:08:48.518 [INFO][4543] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" Namespace="calico-system" Pod="goldmane-7c778bb748-tgf4g" WorkloadEndpoint="ci--4459.2.2--e--d5fd5cf192-k8s-goldmane--7c778bb748--tgf4g-eth0" Dec 16 13:08:48.576954 containerd[1540]: time="2025-12-16T13:08:48.576873071Z" level=info msg="connecting to shim 5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386" address="unix:///run/containerd/s/8c472c44d5a3d2e3639aeb51b43d57421e5aaaee3d9d4d2b0880f1400f191f14" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:48.632089 systemd[1]: Started cri-containerd-5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386.scope - libcontainer container 5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386. 
Dec 16 13:08:48.786221 kubelet[2726]: E1216 13:08:48.785234 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:48.788564 kubelet[2726]: E1216 13:08:48.788515 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" podUID="c5c3da84-f0c0-494e-b5ab-338a3db3dbfc" Dec 16 13:08:48.788861 kubelet[2726]: E1216 13:08:48.788655 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" podUID="9e067442-1617-4c9f-a618-5f4c28d671bd" Dec 16 13:08:48.808943 containerd[1540]: time="2025-12-16T13:08:48.808290824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tgf4g,Uid:6c535668-f4bd-4af5-9cdf-87c693c12696,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cddff1a38c1a44b820806d01bf7648e0bdf6b801780b0ca54f628a3fc9b7386\"" Dec 16 13:08:48.870259 systemd-networkd[1426]: calibc32e7dec9f: Gained IPv6LL Dec 16 13:08:48.952372 containerd[1540]: time="2025-12-16T13:08:48.952290038Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:48.953617 containerd[1540]: time="2025-12-16T13:08:48.953512621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:08:48.953617 containerd[1540]: time="2025-12-16T13:08:48.953587024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:08:48.954326 kubelet[2726]: E1216 13:08:48.954186 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:08:48.954563 kubelet[2726]: E1216 13:08:48.954384 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:08:48.954852 kubelet[2726]: E1216 13:08:48.954768 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zqzhv_calico-system(5092d504-cc04-4db5-bde7-b900923744da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:48.955246 kubelet[2726]: E1216 13:08:48.955165 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:08:48.955433 containerd[1540]: time="2025-12-16T13:08:48.955375025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:08:49.265401 containerd[1540]: time="2025-12-16T13:08:49.265249052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:49.266737 containerd[1540]: time="2025-12-16T13:08:49.266591629Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:08:49.266737 containerd[1540]: time="2025-12-16T13:08:49.266670989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:49.267263 kubelet[2726]: E1216 13:08:49.267199 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:08:49.267631 kubelet[2726]: E1216 13:08:49.267499 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:08:49.267892 kubelet[2726]: E1216 13:08:49.267853 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tgf4g_calico-system(6c535668-f4bd-4af5-9cdf-87c693c12696): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:49.269426 kubelet[2726]: E1216 13:08:49.268094 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgf4g" podUID="6c535668-f4bd-4af5-9cdf-87c693c12696" Dec 16 13:08:49.637592 systemd-networkd[1426]: cali85dec4f6e9e: Gained IPv6LL Dec 16 13:08:49.793091 kubelet[2726]: E1216 13:08:49.793011 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:08:49.795867 kubelet[2726]: E1216 13:08:49.795632 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgf4g" podUID="6c535668-f4bd-4af5-9cdf-87c693c12696" Dec 16 13:08:49.796131 kubelet[2726]: E1216 13:08:49.796063 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:08:50.795124 kubelet[2726]: E1216 13:08:50.795050 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgf4g" podUID="6c535668-f4bd-4af5-9cdf-87c693c12696" Dec 16 13:08:57.940934 systemd[1]: Started sshd@7-143.198.151.179:22-139.178.68.195:59538.service - OpenSSH per-connection 
server daemon (139.178.68.195:59538). Dec 16 13:08:58.157324 sshd[4673]: Accepted publickey for core from 139.178.68.195 port 59538 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:08:58.161811 sshd-session[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:58.175083 systemd-logind[1522]: New session 8 of user core. Dec 16 13:08:58.178672 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:08:58.180811 containerd[1540]: time="2025-12-16T13:08:58.179973281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:08:58.532840 containerd[1540]: time="2025-12-16T13:08:58.532661209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:58.535107 containerd[1540]: time="2025-12-16T13:08:58.534925784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:08:58.535107 containerd[1540]: time="2025-12-16T13:08:58.535033280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:08:58.535898 kubelet[2726]: E1216 13:08:58.535613 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:08:58.535898 kubelet[2726]: E1216 13:08:58.535732 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:08:58.538565 kubelet[2726]: E1216 13:08:58.535894 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8c8d47c9c-47lfm_calico-system(7c8a8da2-19eb-4046-963e-f1ae60b760a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:58.540842 containerd[1540]: time="2025-12-16T13:08:58.540766621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:08:58.772230 sshd[4679]: Connection closed by 139.178.68.195 port 59538 Dec 16 13:08:58.775610 sshd-session[4673]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:58.782469 systemd[1]: sshd@7-143.198.151.179:22-139.178.68.195:59538.service: Deactivated successfully. Dec 16 13:08:58.789312 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:08:58.795798 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:08:58.801741 systemd-logind[1522]: Removed session 8. 
Dec 16 13:08:58.949135 containerd[1540]: time="2025-12-16T13:08:58.949066716Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:58.951076 containerd[1540]: time="2025-12-16T13:08:58.950931263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:08:58.951076 containerd[1540]: time="2025-12-16T13:08:58.951003733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:08:58.952053 kubelet[2726]: E1216 13:08:58.951658 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:08:58.952053 kubelet[2726]: E1216 13:08:58.951736 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:08:58.952053 kubelet[2726]: E1216 13:08:58.951895 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8c8d47c9c-47lfm_calico-system(7c8a8da2-19eb-4046-963e-f1ae60b760a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:58.953414 kubelet[2726]: E1216 13:08:58.951965 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c8d47c9c-47lfm" podUID="7c8a8da2-19eb-4046-963e-f1ae60b760a8" Dec 16 13:09:00.173403 containerd[1540]: time="2025-12-16T13:09:00.172152848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:00.519612 containerd[1540]: time="2025-12-16T13:09:00.517737282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:00.519612 containerd[1540]: time="2025-12-16T13:09:00.519293963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:00.519612 containerd[1540]: time="2025-12-16T13:09:00.519417062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:00.519944 kubelet[2726]: E1216 13:09:00.519804 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:00.519944 kubelet[2726]: E1216 13:09:00.519897 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:00.520507 kubelet[2726]: E1216 13:09:00.520118 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49c7467-qxkcz_calico-apiserver(3c97abc9-28b5-46fd-ac48-9268ba05dd67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:00.520507 kubelet[2726]: E1216 13:09:00.520161 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" podUID="3c97abc9-28b5-46fd-ac48-9268ba05dd67" Dec 16 13:09:01.172775 containerd[1540]: time="2025-12-16T13:09:01.172711551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:09:01.493885 containerd[1540]: time="2025-12-16T13:09:01.493423073Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:01.494594 containerd[1540]: time="2025-12-16T13:09:01.494432234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:09:01.494594 containerd[1540]: time="2025-12-16T13:09:01.494548222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:09:01.495147 kubelet[2726]: E1216 13:09:01.495087 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:01.495372 kubelet[2726]: E1216 
13:09:01.495286 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:01.495863 kubelet[2726]: E1216 13:09:01.495833 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zqzhv_calico-system(5092d504-cc04-4db5-bde7-b900923744da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:01.501384 containerd[1540]: time="2025-12-16T13:09:01.500486621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:09:01.862227 containerd[1540]: time="2025-12-16T13:09:01.862149154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:01.863825 containerd[1540]: time="2025-12-16T13:09:01.863746714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:09:01.866553 containerd[1540]: time="2025-12-16T13:09:01.863910939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:09:01.866642 kubelet[2726]: E1216 13:09:01.864088 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:01.866642 kubelet[2726]: E1216 13:09:01.864159 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:01.866642 kubelet[2726]: E1216 13:09:01.864285 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zqzhv_calico-system(5092d504-cc04-4db5-bde7-b900923744da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:01.867436 kubelet[2726]: E1216 13:09:01.867301 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:09:02.174165 containerd[1540]: time="2025-12-16T13:09:02.172687433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:02.547045 containerd[1540]: time="2025-12-16T13:09:02.546566005Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:02.548020 containerd[1540]: time="2025-12-16T13:09:02.547966081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:02.548109 containerd[1540]: time="2025-12-16T13:09:02.548089121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:02.548641 kubelet[2726]: E1216 13:09:02.548549 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:02.548893 kubelet[2726]: E1216 13:09:02.548737 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:02.549525 kubelet[2726]: E1216 13:09:02.549459 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49c7467-w7dmv_calico-apiserver(c5c3da84-f0c0-494e-b5ab-338a3db3dbfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:02.550139 kubelet[2726]: E1216 13:09:02.549508 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" podUID="c5c3da84-f0c0-494e-b5ab-338a3db3dbfc" Dec 16 13:09:02.551198 containerd[1540]: time="2025-12-16T13:09:02.551027037Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:09:02.852379 containerd[1540]: time="2025-12-16T13:09:02.850534308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:02.855164 containerd[1540]: time="2025-12-16T13:09:02.854975572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:09:02.855164 containerd[1540]: time="2025-12-16T13:09:02.855038930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:09:02.856426 kubelet[2726]: E1216 13:09:02.855362 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:02.856426 kubelet[2726]: E1216 13:09:02.855436 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:02.856426 kubelet[2726]: E1216 13:09:02.855555 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-9f569d77f-ndtwq_calico-system(9e067442-1617-4c9f-a618-5f4c28d671bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:02.856426 kubelet[2726]: E1216 13:09:02.855614 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" podUID="9e067442-1617-4c9f-a618-5f4c28d671bd" Dec 16 13:09:03.792911 systemd[1]: Started sshd@8-143.198.151.179:22-139.178.68.195:48072.service - OpenSSH per-connection server daemon (139.178.68.195:48072). Dec 16 13:09:03.914773 sshd[4698]: Accepted publickey for core from 139.178.68.195 port 48072 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:03.917784 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:03.928864 systemd-logind[1522]: New session 9 of user core. Dec 16 13:09:03.934640 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 13:09:04.248932 sshd[4701]: Connection closed by 139.178.68.195 port 48072 Dec 16 13:09:04.251491 sshd-session[4698]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:04.261705 systemd[1]: sshd@8-143.198.151.179:22-139.178.68.195:48072.service: Deactivated successfully. Dec 16 13:09:04.267357 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:09:04.269178 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:09:04.274219 systemd-logind[1522]: Removed session 9. Dec 16 13:09:05.173749 containerd[1540]: time="2025-12-16T13:09:05.173025927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:09:05.528567 containerd[1540]: time="2025-12-16T13:09:05.528086601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:05.531369 containerd[1540]: time="2025-12-16T13:09:05.531276714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:09:05.531721 containerd[1540]: time="2025-12-16T13:09:05.531687489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:05.532198 kubelet[2726]: E1216 13:09:05.532155 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:05.532899 kubelet[2726]: E1216 13:09:05.532689 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:05.532899 kubelet[2726]: E1216 13:09:05.532808 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tgf4g_calico-system(6c535668-f4bd-4af5-9cdf-87c693c12696): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:05.532899 kubelet[2726]: E1216 13:09:05.532852 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgf4g" podUID="6c535668-f4bd-4af5-9cdf-87c693c12696" Dec 16 13:09:07.169997 kubelet[2726]: E1216 13:09:07.169570 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 16 13:09:07.173854 kubelet[2726]: E1216 13:09:07.173736 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:09:09.169178 kubelet[2726]: E1216 13:09:09.169124 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:09:09.270647 systemd[1]: Started sshd@9-143.198.151.179:22-139.178.68.195:48078.service - OpenSSH per-connection server daemon (139.178.68.195:48078). Dec 16 13:09:09.372385 sshd[4722]: Accepted publickey for core from 139.178.68.195 port 48078 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:09.375710 sshd-session[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:09.386148 systemd-logind[1522]: New session 10 of user core. Dec 16 13:09:09.394647 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:09:09.571247 sshd[4725]: Connection closed by 139.178.68.195 port 48078 Dec 16 13:09:09.572640 sshd-session[4722]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:09.585941 systemd[1]: sshd@9-143.198.151.179:22-139.178.68.195:48078.service: Deactivated successfully. Dec 16 13:09:09.592266 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:09:09.596571 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:09:09.601964 systemd[1]: Started sshd@10-143.198.151.179:22-139.178.68.195:48086.service - OpenSSH per-connection server daemon (139.178.68.195:48086). Dec 16 13:09:09.604745 systemd-logind[1522]: Removed session 10. Dec 16 13:09:09.691702 sshd[4738]: Accepted publickey for core from 139.178.68.195 port 48086 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:09.694641 sshd-session[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:09.705750 systemd-logind[1522]: New session 11 of user core. Dec 16 13:09:09.712639 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:09:10.053643 sshd[4741]: Connection closed by 139.178.68.195 port 48086 Dec 16 13:09:10.054521 sshd-session[4738]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:10.072657 systemd[1]: sshd@10-143.198.151.179:22-139.178.68.195:48086.service: Deactivated successfully. Dec 16 13:09:10.078329 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:09:10.082938 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:09:10.091822 systemd[1]: Started sshd@11-143.198.151.179:22-139.178.68.195:48088.service - OpenSSH per-connection server daemon (139.178.68.195:48088). Dec 16 13:09:10.096449 systemd-logind[1522]: Removed session 11. Dec 16 13:09:10.211669 sshd[4751]: Accepted publickey for core from 139.178.68.195 port 48088 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:10.214483 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:10.227788 systemd-logind[1522]: New session 12 of user core. Dec 16 13:09:10.233322 systemd[1]: Started session-12.scope - Session 12 of User core.
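The repeated dns.go:154 warnings above fire because glibc-style resolvers honor at most three nameserver entries, and the merged resolv.conf the kubelet builds for pods exceeds that limit (note the duplicated 67.207.67.3 in the applied line). A standalone sketch of the same check follows; it is not kubelet's own code.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; resolvers ignore entries past this

func main() {
	path := "/etc/resolv.conf"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, only %v will be applied\n",
			len(servers), servers[:maxNameservers])
	} else {
		fmt.Printf("ok: %v\n", servers)
	}
}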
Dec 16 13:09:10.424092 sshd[4754]: Connection closed by 139.178.68.195 port 48088 Dec 16 13:09:10.425074 sshd-session[4751]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:10.433529 systemd[1]: sshd@11-143.198.151.179:22-139.178.68.195:48088.service: Deactivated successfully. Dec 16 13:09:10.438726 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:09:10.443468 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:09:10.445356 systemd-logind[1522]: Removed session 12. Dec 16 13:09:11.173447 kubelet[2726]: E1216 13:09:11.173259 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c8d47c9c-47lfm" podUID="7c8a8da2-19eb-4046-963e-f1ae60b760a8" Dec 16 13:09:13.169996 kubelet[2726]: E1216 13:09:13.169954 2726 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 16 13:09:14.173378 kubelet[2726]: E1216 13:09:14.172189 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" podUID="3c97abc9-28b5-46fd-ac48-9268ba05dd67" Dec 16 13:09:15.448283 systemd[1]: Started sshd@12-143.198.151.179:22-139.178.68.195:46590.service - OpenSSH per-connection server daemon (139.178.68.195:46590). Dec 16 13:09:15.638416 sshd[4770]: Accepted publickey for core from 139.178.68.195 port 46590 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:15.640885 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:15.649187 systemd-logind[1522]: New session 13 of user core. Dec 16 13:09:15.656847 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:09:15.922697 sshd[4775]: Connection closed by 139.178.68.195 port 46590 Dec 16 13:09:15.926557 sshd-session[4770]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:15.937036 systemd[1]: sshd@12-143.198.151.179:22-139.178.68.195:46590.service: Deactivated successfully. Dec 16 13:09:15.943989 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:09:15.948137 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit. 
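The "Back-off pulling image" messages above mark the transition from ErrImagePull to ImagePullBackOff: between attempts the kubelet waits out an exponential back-off keyed per image and pod. A sketch of that doubling using the flowcontrol helper the kubelet builds on; the 10s initial delay and 300s cap are the commonly documented kubelet defaults, assumed here rather than read from this log.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Assumed kubelet defaults: 10s initial back-off, doubling to a 300s cap.
	backoff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
	key := "ghcr.io/flatcar/calico/goldmane:v3.30.4" // one back-off entry per image+pod

	now := time.Now()
	for failure := 1; failure <= 6; failure++ {
		backoff.Next(key, now) // record a failed pull; doubles the delay up to the cap
		delay := backoff.Get(key)
		fmt.Printf("failure %d: next pull attempt in %s\n", failure, delay)
		now = now.Add(delay)
	}
}

Run against the timestamps above, this pattern matches the widening gaps between pull attempts for the same image (13:09:02, then 13:09:27 for kube-controllers, and so on).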
Dec 16 13:09:15.951784 systemd-logind[1522]: Removed session 13. Dec 16 13:09:16.173474 kubelet[2726]: E1216 13:09:16.172829 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" podUID="9e067442-1617-4c9f-a618-5f4c28d671bd" Dec 16 13:09:16.175176 kubelet[2726]: E1216 13:09:16.174964 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:09:17.174384 kubelet[2726]: E1216 13:09:17.172157 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" podUID="c5c3da84-f0c0-494e-b5ab-338a3db3dbfc" Dec 16 13:09:20.174955 kubelet[2726]: E1216 13:09:20.174809 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgf4g" podUID="6c535668-f4bd-4af5-9cdf-87c693c12696" Dec 16 13:09:20.942566 systemd[1]: Started sshd@13-143.198.151.179:22-139.178.68.195:43414.service - OpenSSH per-connection server daemon (139.178.68.195:43414). 
Dec 16 13:09:21.097075 sshd[4816]: Accepted publickey for core from 139.178.68.195 port 43414 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:21.101247 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:21.110055 systemd-logind[1522]: New session 14 of user core. Dec 16 13:09:21.116844 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:09:21.356402 sshd[4819]: Connection closed by 139.178.68.195 port 43414 Dec 16 13:09:21.357602 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:21.366113 systemd[1]: sshd@13-143.198.151.179:22-139.178.68.195:43414.service: Deactivated successfully. Dec 16 13:09:21.366280 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:09:21.370486 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:09:21.374786 systemd-logind[1522]: Removed session 14. Dec 16 13:09:25.170133 containerd[1540]: time="2025-12-16T13:09:25.170043542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:09:25.529687 containerd[1540]: time="2025-12-16T13:09:25.529338460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:25.530815 containerd[1540]: time="2025-12-16T13:09:25.530681496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:09:25.530815 containerd[1540]: time="2025-12-16T13:09:25.530769045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:09:25.531178 kubelet[2726]: E1216 13:09:25.531023 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:25.531178 kubelet[2726]: E1216 13:09:25.531096 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:25.531710 kubelet[2726]: E1216 13:09:25.531209 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8c8d47c9c-47lfm_calico-system(7c8a8da2-19eb-4046-963e-f1ae60b760a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:25.534804 containerd[1540]: time="2025-12-16T13:09:25.534758228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:09:25.867485 containerd[1540]: time="2025-12-16T13:09:25.867411926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:09:25.869109 containerd[1540]: time="2025-12-16T13:09:25.868968385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:09:25.869109 containerd[1540]: time="2025-12-16T13:09:25.869071450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:09:25.869402 kubelet[2726]: E1216 13:09:25.869243 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:25.869402 kubelet[2726]: E1216 13:09:25.869320 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:25.870465 kubelet[2726]: E1216 13:09:25.869526 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8c8d47c9c-47lfm_calico-system(7c8a8da2-19eb-4046-963e-f1ae60b760a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:25.870465 kubelet[2726]: E1216 13:09:25.869574 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c8d47c9c-47lfm" podUID="7c8a8da2-19eb-4046-963e-f1ae60b760a8" Dec 16 13:09:26.174823 containerd[1540]: time="2025-12-16T13:09:26.174609871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:26.374640 systemd[1]: Started sshd@14-143.198.151.179:22-139.178.68.195:43420.service - OpenSSH per-connection server daemon (139.178.68.195:43420).
Dec 16 13:09:26.519842 sshd[4831]: Accepted publickey for core from 139.178.68.195 port 43420 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:26.522608 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:26.527765 containerd[1540]: time="2025-12-16T13:09:26.527547649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:26.528768 containerd[1540]: time="2025-12-16T13:09:26.528500272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:26.528768 containerd[1540]: time="2025-12-16T13:09:26.528629973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:26.529641 kubelet[2726]: E1216 13:09:26.529579 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:26.529735 kubelet[2726]: E1216 13:09:26.529650 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:26.531521 kubelet[2726]: E1216 13:09:26.531199 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49c7467-qxkcz_calico-apiserver(3c97abc9-28b5-46fd-ac48-9268ba05dd67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:26.531892 kubelet[2726]: E1216 13:09:26.531815 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" podUID="3c97abc9-28b5-46fd-ac48-9268ba05dd67" Dec 16 13:09:26.532928 systemd-logind[1522]: New session 15 of user core. Dec 16 13:09:26.539676 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:09:26.844819 sshd[4834]: Connection closed by 139.178.68.195 port 43420 Dec 16 13:09:26.845752 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:26.854098 systemd[1]: sshd@14-143.198.151.179:22-139.178.68.195:43420.service: Deactivated successfully. Dec 16 13:09:26.855448 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:09:26.857567 systemd[1]: session-15.scope: Deactivated successfully. 
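The 404s above come from the registry itself, so they can be confirmed independently of containerd with the standard registry v2 flow: fetch an anonymous pull-scoped token, then HEAD the manifest. A triage sketch follows; the token endpoint and Accept header are standard registry v2 conventions, not anything taken from this log.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/kube-controllers", "v3.30.4"

	// Step 1: anonymous bearer token scoped to pulling this repository.
	tokURL := fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// Step 2: HEAD the manifest; 200 means the tag exists, 404 matches the
	// "not found" containerd reported above.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	res.Body.Close()
	fmt.Println(repo+":"+tag, "->", res.Status)
}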
Dec 16 13:09:26.864736 systemd-logind[1522]: Removed session 15. Dec 16 13:09:27.173472 containerd[1540]: time="2025-12-16T13:09:27.172032834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:09:27.538243 containerd[1540]: time="2025-12-16T13:09:27.537885968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:27.539549 containerd[1540]: time="2025-12-16T13:09:27.539005400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:09:27.539549 containerd[1540]: time="2025-12-16T13:09:27.539062736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:09:27.540545 kubelet[2726]: E1216 13:09:27.540416 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:27.540545 kubelet[2726]: E1216 13:09:27.540497 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:27.542714 kubelet[2726]: E1216 13:09:27.540625 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-9f569d77f-ndtwq_calico-system(9e067442-1617-4c9f-a618-5f4c28d671bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:27.542714 kubelet[2726]: E1216 13:09:27.540676 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" podUID="9e067442-1617-4c9f-a618-5f4c28d671bd" Dec 16 13:09:29.173093 containerd[1540]: time="2025-12-16T13:09:29.173035230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:09:29.520966 containerd[1540]: time="2025-12-16T13:09:29.520537030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:09:29.521564 containerd[1540]: time="2025-12-16T13:09:29.521493488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:09:29.521703 containerd[1540]: time="2025-12-16T13:09:29.521604462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:09:29.522131 kubelet[2726]: E1216 13:09:29.522017 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:29.523479 kubelet[2726]: E1216 13:09:29.522645 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:29.523479 kubelet[2726]: E1216 13:09:29.522893 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zqzhv_calico-system(5092d504-cc04-4db5-bde7-b900923744da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:29.524584 containerd[1540]: time="2025-12-16T13:09:29.524537996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:09:29.827027 containerd[1540]: time="2025-12-16T13:09:29.826462065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:29.827704 containerd[1540]: time="2025-12-16T13:09:29.827600025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:09:29.827884 containerd[1540]: time="2025-12-16T13:09:29.827673525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:09:29.828269 kubelet[2726]: E1216 13:09:29.828146 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:29.828486 kubelet[2726]: E1216 13:09:29.828241 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:09:29.830314 kubelet[2726]: E1216 13:09:29.828760 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zqzhv_calico-system(5092d504-cc04-4db5-bde7-b900923744da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:29.830314 kubelet[2726]: E1216 13:09:29.828927 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:09:30.174389 containerd[1540]: time="2025-12-16T13:09:30.172495357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:30.515546 containerd[1540]: time="2025-12-16T13:09:30.514827595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:30.516706 containerd[1540]: time="2025-12-16T13:09:30.516617445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:30.517011 containerd[1540]: time="2025-12-16T13:09:30.516622451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:30.517380 kubelet[2726]: E1216 13:09:30.517241 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:30.517380 kubelet[2726]: E1216 13:09:30.517318 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:09:30.517529 kubelet[2726]: E1216 13:09:30.517442 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49c7467-w7dmv_calico-apiserver(c5c3da84-f0c0-494e-b5ab-338a3db3dbfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:30.517529 kubelet[2726]: E1216 13:09:30.517481 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" podUID="c5c3da84-f0c0-494e-b5ab-338a3db3dbfc" Dec 16 13:09:31.860666 systemd[1]: Started sshd@15-143.198.151.179:22-139.178.68.195:60322.service - OpenSSH per-connection server daemon (139.178.68.195:60322). Dec 16 13:09:31.934138 sshd[4854]: Accepted publickey for core from 139.178.68.195 port 60322 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:31.936642 sshd-session[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:31.945839 systemd-logind[1522]: New session 16 of user core. Dec 16 13:09:31.952903 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:09:32.163491 sshd[4857]: Connection closed by 139.178.68.195 port 60322 Dec 16 13:09:32.162317 sshd-session[4854]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:32.181881 systemd[1]: sshd@15-143.198.151.179:22-139.178.68.195:60322.service: Deactivated successfully. Dec 16 13:09:32.188075 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:09:32.191581 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:09:32.196697 systemd[1]: Started sshd@16-143.198.151.179:22-139.178.68.195:60332.service - OpenSSH per-connection server daemon (139.178.68.195:60332). Dec 16 13:09:32.203849 systemd-logind[1522]: Removed session 16. Dec 16 13:09:32.294025 sshd[4869]: Accepted publickey for core from 139.178.68.195 port 60332 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:32.296055 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:32.302957 systemd-logind[1522]: New session 17 of user core. Dec 16 13:09:32.311656 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 13:09:32.658794 sshd[4872]: Connection closed by 139.178.68.195 port 60332 Dec 16 13:09:32.664341 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:32.685394 systemd[1]: Started sshd@17-143.198.151.179:22-139.178.68.195:60338.service - OpenSSH per-connection server daemon (139.178.68.195:60338). Dec 16 13:09:32.686257 systemd[1]: sshd@16-143.198.151.179:22-139.178.68.195:60332.service: Deactivated successfully. Dec 16 13:09:32.697207 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:09:32.704618 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:09:32.708446 systemd-logind[1522]: Removed session 17. Dec 16 13:09:32.821079 sshd[4883]: Accepted publickey for core from 139.178.68.195 port 60338 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:32.823503 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:32.831766 systemd-logind[1522]: New session 18 of user core. Dec 16 13:09:32.837028 systemd[1]: Started session-18.scope - Session 18 of User core.
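Each inbound connection above spawns a transient sshd@<n>-<local>:22-<peer>:<port>.service plus a session-<n>.scope, and systemd tears both down at disconnect, which is why the unit names keep incrementing. A sketch that lists both unit kinds over systemd's D-Bus API, assuming the go-systemd bindings and access to the system bus:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Match the per-connection sshd services and logind session scopes
	// seen in the journal above.
	units, err := conn.ListUnitsByPatternsContext(ctx, nil,
		[]string{"sshd@*.service", "session-*.scope"})
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range units {
		fmt.Printf("%-60s %s/%s\n", u.Name, u.ActiveState, u.SubState)
	}
}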
Dec 16 13:09:33.173099 containerd[1540]: time="2025-12-16T13:09:33.172646670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:09:33.522694 containerd[1540]: time="2025-12-16T13:09:33.522510468Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:33.523667 containerd[1540]: time="2025-12-16T13:09:33.523584474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:09:33.523820 containerd[1540]: time="2025-12-16T13:09:33.523693151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:33.523980 kubelet[2726]: E1216 13:09:33.523933 2726 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:33.524619 kubelet[2726]: E1216 13:09:33.523999 2726 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:33.524619 kubelet[2726]: E1216 13:09:33.524107 2726 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tgf4g_calico-system(6c535668-f4bd-4af5-9cdf-87c693c12696): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:33.524619 kubelet[2726]: E1216 13:09:33.524152 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgf4g" podUID="6c535668-f4bd-4af5-9cdf-87c693c12696" Dec 16 13:09:33.916026 sshd[4889]: Connection closed by 139.178.68.195 port 60338 Dec 16 13:09:33.919066 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:33.936961 systemd[1]: Started sshd@18-143.198.151.179:22-139.178.68.195:60346.service - OpenSSH per-connection server daemon (139.178.68.195:60346). Dec 16 13:09:33.938778 systemd[1]: sshd@17-143.198.151.179:22-139.178.68.195:60338.service: Deactivated successfully. Dec 16 13:09:33.953614 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:09:33.962109 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:09:33.967801 systemd-logind[1522]: Removed session 18. 
Dec 16 13:09:34.102787 sshd[4901]: Accepted publickey for core from 139.178.68.195 port 60346 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:34.107810 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:34.120201 systemd-logind[1522]: New session 19 of user core. Dec 16 13:09:34.125594 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 13:09:34.777685 sshd[4907]: Connection closed by 139.178.68.195 port 60346 Dec 16 13:09:34.779179 sshd-session[4901]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:34.793171 systemd[1]: sshd@18-143.198.151.179:22-139.178.68.195:60346.service: Deactivated successfully. Dec 16 13:09:34.799304 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:09:34.802398 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:09:34.812246 systemd[1]: Started sshd@19-143.198.151.179:22-139.178.68.195:60356.service - OpenSSH per-connection server daemon (139.178.68.195:60356). Dec 16 13:09:34.813484 systemd-logind[1522]: Removed session 19. Dec 16 13:09:34.890305 sshd[4917]: Accepted publickey for core from 139.178.68.195 port 60356 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:34.892059 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:34.901531 systemd-logind[1522]: New session 20 of user core. Dec 16 13:09:34.906616 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:09:35.101582 sshd[4920]: Connection closed by 139.178.68.195 port 60356 Dec 16 13:09:35.102597 sshd-session[4917]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:35.109180 systemd[1]: sshd@19-143.198.151.179:22-139.178.68.195:60356.service: Deactivated successfully. Dec 16 13:09:35.112772 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:09:35.116514 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:09:35.118837 systemd-logind[1522]: Removed session 20. Dec 16 13:09:38.172242 kubelet[2726]: E1216 13:09:38.171666 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" podUID="3c97abc9-28b5-46fd-ac48-9268ba05dd67" Dec 16 13:09:40.119132 systemd[1]: Started sshd@20-143.198.151.179:22-139.178.68.195:40668.service - OpenSSH per-connection server daemon (139.178.68.195:40668). 
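The SHA256:TIdc... string on each "Accepted publickey" line above is the base64-encoded SHA-256 digest of the raw public key blob. A sketch that derives the same fingerprint from an authorized_keys entry supplied on the command line (no key material is reproduced here):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Usage: go run fprint.go "ssh-ed25519 AAAA... core@example"
	if len(os.Args) < 2 {
		log.Fatal("usage: fprint '<authorized_keys line>'")
	}
	pub, comment, _, _, err := ssh.ParseAuthorizedKey([]byte(os.Args[1]))
	if err != nil {
		log.Fatal(err)
	}
	// Matches sshd's log format, e.g. "SHA256:TIdcTyHOx+D1xZ5Zenq...".
	fmt.Printf("%s %s\n", ssh.FingerprintSHA256(pub), comment)
}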
Dec 16 13:09:40.178849 kubelet[2726]: E1216 13:09:40.177787 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" podUID="9e067442-1617-4c9f-a618-5f4c28d671bd" Dec 16 13:09:40.183430 kubelet[2726]: E1216 13:09:40.183251 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c8d47c9c-47lfm" podUID="7c8a8da2-19eb-4046-963e-f1ae60b760a8" Dec 16 13:09:40.207149 sshd[4936]: Accepted publickey for core from 139.178.68.195 port 40668 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:40.211086 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:40.228707 systemd-logind[1522]: New session 21 of user core. Dec 16 13:09:40.236828 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:09:40.421245 sshd[4939]: Connection closed by 139.178.68.195 port 40668 Dec 16 13:09:40.422919 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:40.432994 systemd[1]: sshd@20-143.198.151.179:22-139.178.68.195:40668.service: Deactivated successfully. Dec 16 13:09:40.433396 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:09:40.440637 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:09:40.444897 systemd-logind[1522]: Removed session 21. 
Dec 16 13:09:42.172059 kubelet[2726]: E1216 13:09:42.171934 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da" Dec 16 13:09:44.176381 kubelet[2726]: E1216 13:09:44.176312 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-w7dmv" podUID="c5c3da84-f0c0-494e-b5ab-338a3db3dbfc" Dec 16 13:09:45.170974 kubelet[2726]: E1216 13:09:45.170876 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgf4g" podUID="6c535668-f4bd-4af5-9cdf-87c693c12696" Dec 16 13:09:45.438401 systemd[1]: Started sshd@21-143.198.151.179:22-139.178.68.195:40682.service - OpenSSH per-connection server daemon (139.178.68.195:40682). Dec 16 13:09:45.578090 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 40682 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:45.581088 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:45.589075 systemd-logind[1522]: New session 22 of user core. Dec 16 13:09:45.595728 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 13:09:45.897962 sshd[4956]: Connection closed by 139.178.68.195 port 40682 Dec 16 13:09:45.898437 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:45.906560 systemd[1]: sshd@21-143.198.151.179:22-139.178.68.195:40682.service: Deactivated successfully. Dec 16 13:09:45.910836 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:09:45.912413 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:09:45.914524 systemd-logind[1522]: Removed session 22. 
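The failures the pod_workers lines keep reporting also surface in pod status as a Waiting reason (ErrImagePull or ImagePullBackOff), which is often easier to triage from the API server than from the node journal. A client-go sketch, assuming a kubeconfig at $HOME/.kube/config; adjust for in-cluster use.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// List pods in all namespaces and print any container stuck in a
	// Waiting state, e.g. "calico-system/csi-node-driver-zqzhv calico-csi: ImagePullBackOff".
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		for _, cst := range p.Status.ContainerStatuses {
			if w := cst.State.Waiting; w != nil && w.Reason != "" {
				fmt.Printf("%s/%s %s: %s\n", p.Namespace, p.Name, cst.Name, w.Reason)
			}
		}
	}
}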
Dec 16 13:09:50.924855 systemd[1]: Started sshd@22-143.198.151.179:22-139.178.68.195:40680.service - OpenSSH per-connection server daemon (139.178.68.195:40680). Dec 16 13:09:51.057391 sshd[4992]: Accepted publickey for core from 139.178.68.195 port 40680 ssh2: RSA SHA256:TIdcTyHOx+D1xZ5ZenqZipr6nxqWJcVoo68o1Z2cWQI Dec 16 13:09:51.061064 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:51.072602 systemd-logind[1522]: New session 23 of user core. Dec 16 13:09:51.077945 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:09:51.297245 sshd[4995]: Connection closed by 139.178.68.195 port 40680 Dec 16 13:09:51.298526 sshd-session[4992]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:51.304770 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:09:51.305157 systemd[1]: sshd@22-143.198.151.179:22-139.178.68.195:40680.service: Deactivated successfully. Dec 16 13:09:51.310674 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:09:51.317281 systemd-logind[1522]: Removed session 23. Dec 16 13:09:52.173386 kubelet[2726]: E1216 13:09:52.171807 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9f569d77f-ndtwq" podUID="9e067442-1617-4c9f-a618-5f4c28d671bd" Dec 16 13:09:52.180377 kubelet[2726]: E1216 13:09:52.179072 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c8d47c9c-47lfm" podUID="7c8a8da2-19eb-4046-963e-f1ae60b760a8" Dec 16 13:09:53.172084 kubelet[2726]: E1216 13:09:53.172017 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49c7467-qxkcz" podUID="3c97abc9-28b5-46fd-ac48-9268ba05dd67"
Dec 16 13:09:53.174618 kubelet[2726]: E1216 13:09:53.174505 2726 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zqzhv" podUID="5092d504-cc04-4db5-bde7-b900923744da"