Dec 12 18:38:39.946626 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:38:39.946669 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:38:39.946689 kernel: BIOS-provided physical RAM map:
Dec 12 18:38:39.946701 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 12 18:38:39.946712 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 12 18:38:39.946723 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 12 18:38:39.946737 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Dec 12 18:38:39.946760 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Dec 12 18:38:39.946772 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:38:39.946784 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 12 18:38:39.946795 kernel: NX (Execute Disable) protection: active
Dec 12 18:38:39.946812 kernel: APIC: Static calls initialized
Dec 12 18:38:39.946824 kernel: SMBIOS 2.8 present.
Dec 12 18:38:39.946837 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 12 18:38:39.946852 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:38:39.946865 kernel: Hypervisor detected: KVM
Dec 12 18:38:39.946886 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 12 18:38:39.946899 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:38:39.946912 kernel: kvm-clock: using sched offset of 4768799748 cycles
Dec 12 18:38:39.946926 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:38:39.946940 kernel: tsc: Detected 2494.140 MHz processor
Dec 12 18:38:39.946953 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:38:39.946966 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:38:39.946977 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 12 18:38:39.946991 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 12 18:38:39.947004 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:38:39.947022 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:38:39.947122 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Dec 12 18:38:39.947135 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:39.947149 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:39.947162 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:39.947176 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 12 18:38:39.947189 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:39.947203 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:39.947220 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:39.947234 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:39.947247 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854]
Dec 12 18:38:39.947261 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0]
Dec 12 18:38:39.947274 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 12 18:38:39.947288 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4]
Dec 12 18:38:39.947308 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c]
Dec 12 18:38:39.947325 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4]
Dec 12 18:38:39.947339 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc]
Dec 12 18:38:39.947354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 12 18:38:39.947368 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 12 18:38:39.947382 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Dec 12 18:38:39.947397 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Dec 12 18:38:39.947411 kernel: Zone ranges:
Dec 12 18:38:39.947426 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:38:39.947444 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Dec 12 18:38:39.947458 kernel: Normal empty
Dec 12 18:38:39.947472 kernel: Device empty
Dec 12 18:38:39.947486 kernel: Movable zone start for each node
Dec 12 18:38:39.947500 kernel: Early memory node ranges
Dec 12 18:38:39.947515 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 12 18:38:39.948086 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Dec 12 18:38:39.948111 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Dec 12 18:38:39.948126 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:38:39.948165 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 18:38:39.948180 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Dec 12 18:38:39.948194 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:38:39.948215 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:38:39.948230 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:38:39.948247 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:38:39.948262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:38:39.948276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:38:39.948294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:38:39.948314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:38:39.948328 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:38:39.948343 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:38:39.948357 kernel: TSC deadline timer available
Dec 12 18:38:39.948371 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:38:39.948385 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:38:39.948399 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:38:39.948414 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:38:39.948428 kernel: CPU topo: Num. cores per package: 2
Dec 12 18:38:39.948442 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:38:39.948459 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:38:39.948473 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:38:39.948488 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 12 18:38:39.948499 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:38:39.948511 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:38:39.948524 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:38:39.948539 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:38:39.948553 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:38:39.948567 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:38:39.948585 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 12 18:38:39.948603 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:38:39.948618 kernel: random: crng init done
Dec 12 18:38:39.948631 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:38:39.948646 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 12 18:38:39.948660 kernel: Fallback order for Node 0: 0
Dec 12 18:38:39.948673 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Dec 12 18:38:39.948687 kernel: Policy zone: DMA32
Dec 12 18:38:39.948706 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:38:39.948720 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:38:39.948735 kernel: Kernel/User page tables isolation: enabled
Dec 12 18:38:39.948749 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:38:39.948764 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:38:39.948778 kernel: Dynamic Preempt: voluntary
Dec 12 18:38:39.948792 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:38:39.948821 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:38:39.948836 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:38:39.948856 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:38:39.948871 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:38:39.948885 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:38:39.948899 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:38:39.948920 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:38:39.948934 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:38:39.948952 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:38:39.948964 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:38:39.948978 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 12 18:38:39.948997 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:38:39.949011 kernel: Console: colour VGA+ 80x25
Dec 12 18:38:39.950070 kernel: printk: legacy console [tty0] enabled
Dec 12 18:38:39.950093 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:38:39.950103 kernel: ACPI: Core revision 20240827
Dec 12 18:38:39.950114 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:38:39.950137 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:38:39.950149 kernel: x2apic enabled
Dec 12 18:38:39.950158 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:38:39.950168 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:38:39.950177 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Dec 12 18:38:39.950191 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Dec 12 18:38:39.950204 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 12 18:38:39.950213 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 12 18:38:39.950222 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:38:39.950231 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:38:39.950240 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:38:39.950252 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 12 18:38:39.950261 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:38:39.950270 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:38:39.950279 kernel: MDS: Mitigation: Clear CPU buffers
Dec 12 18:38:39.950287 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:38:39.950296 kernel: active return thunk: its_return_thunk
Dec 12 18:38:39.950305 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 12 18:38:39.950314 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:38:39.950326 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:38:39.950335 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:38:39.950343 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:38:39.950352 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 12 18:38:39.950361 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:38:39.950370 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:38:39.950379 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:38:39.950388 kernel: landlock: Up and running.
Dec 12 18:38:39.950396 kernel: SELinux: Initializing.
Dec 12 18:38:39.950408 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 12 18:38:39.950417 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 12 18:38:39.950426 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Dec 12 18:38:39.950435 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Dec 12 18:38:39.950444 kernel: signal: max sigframe size: 1776
Dec 12 18:38:39.950452 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:38:39.950462 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:38:39.950471 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:38:39.950479 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 12 18:38:39.950492 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:38:39.950503 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:38:39.950512 kernel: .... node #0, CPUs: #1
Dec 12 18:38:39.950520 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:38:39.950529 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Dec 12 18:38:39.950539 kernel: Memory: 1958716K/2096612K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133332K reserved, 0K cma-reserved)
Dec 12 18:38:39.950548 kernel: devtmpfs: initialized
Dec 12 18:38:39.950557 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:38:39.950566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:38:39.950578 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:38:39.950587 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:38:39.950595 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:38:39.950604 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:38:39.950613 kernel: audit: type=2000 audit(1765564716.349:1): state=initialized audit_enabled=0 res=1
Dec 12 18:38:39.950622 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:38:39.950630 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:38:39.950639 kernel: cpuidle: using governor menu
Dec 12 18:38:39.950648 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:38:39.950659 kernel: dca service started, version 1.12.1
Dec 12 18:38:39.950668 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:38:39.950677 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:38:39.950686 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:38:39.950694 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:38:39.950703 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:38:39.950712 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:38:39.950721 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:38:39.950729 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:38:39.950741 kernel: ACPI: Interpreter enabled
Dec 12 18:38:39.950750 kernel: ACPI: PM: (supports S0 S5)
Dec 12 18:38:39.950759 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:38:39.950768 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:38:39.950777 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:38:39.950786 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 12 18:38:39.950794 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:38:39.951060 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:38:39.951168 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 12 18:38:39.951260 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 12 18:38:39.951272 kernel: acpiphp: Slot [3] registered
Dec 12 18:38:39.951281 kernel: acpiphp: Slot [4] registered
Dec 12 18:38:39.951290 kernel: acpiphp: Slot [5] registered
Dec 12 18:38:39.951298 kernel: acpiphp: Slot [6] registered
Dec 12 18:38:39.951307 kernel: acpiphp: Slot [7] registered
Dec 12 18:38:39.951316 kernel: acpiphp: Slot [8] registered
Dec 12 18:38:39.951324 kernel: acpiphp: Slot [9] registered
Dec 12 18:38:39.951337 kernel: acpiphp: Slot [10] registered
Dec 12 18:38:39.951346 kernel: acpiphp: Slot [11] registered
Dec 12 18:38:39.951355 kernel: acpiphp: Slot [12] registered
Dec 12 18:38:39.951363 kernel: acpiphp: Slot [13] registered
Dec 12 18:38:39.951372 kernel: acpiphp: Slot [14] registered
Dec 12 18:38:39.951381 kernel: acpiphp: Slot [15] registered
Dec 12 18:38:39.951390 kernel: acpiphp: Slot [16] registered
Dec 12 18:38:39.951399 kernel: acpiphp: Slot [17] registered
Dec 12 18:38:39.951412 kernel: acpiphp: Slot [18] registered
Dec 12 18:38:39.951429 kernel: acpiphp: Slot [19] registered
Dec 12 18:38:39.951443 kernel: acpiphp: Slot [20] registered
Dec 12 18:38:39.951458 kernel: acpiphp: Slot [21] registered
Dec 12 18:38:39.951472 kernel: acpiphp: Slot [22] registered
Dec 12 18:38:39.951486 kernel: acpiphp: Slot [23] registered
Dec 12 18:38:39.951500 kernel: acpiphp: Slot [24] registered
Dec 12 18:38:39.951516 kernel: acpiphp: Slot [25] registered
Dec 12 18:38:39.951530 kernel: acpiphp: Slot [26] registered
Dec 12 18:38:39.951545 kernel: acpiphp: Slot [27] registered
Dec 12 18:38:39.951560 kernel: acpiphp: Slot [28] registered
Dec 12 18:38:39.951578 kernel: acpiphp: Slot [29] registered
Dec 12 18:38:39.951593 kernel: acpiphp: Slot [30] registered
Dec 12 18:38:39.951609 kernel: acpiphp: Slot [31] registered
Dec 12 18:38:39.951624 kernel: PCI host bridge to bus 0000:00
Dec 12 18:38:39.951820 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:38:39.951952 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:38:39.954277 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:38:39.954466 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 12 18:38:39.954597 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 12 18:38:39.954722 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:38:39.954922 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:38:39.955147 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:38:39.955314 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 12 18:38:39.955467 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Dec 12 18:38:39.955606 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Dec 12 18:38:39.955841 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Dec 12 18:38:39.955993 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Dec 12 18:38:39.956250 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Dec 12 18:38:39.956428 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 12 18:38:39.956607 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Dec 12 18:38:39.956793 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 12 18:38:39.956941 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 12 18:38:39.957159 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 12 18:38:39.957323 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:38:39.957467 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 12 18:38:39.957608 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 12 18:38:39.957755 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Dec 12 18:38:39.957896 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Dec 12 18:38:39.958062 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:38:39.958263 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:38:39.958409 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Dec 12 18:38:39.958551 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Dec 12 18:38:39.958690 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 12 18:38:39.958874 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:38:39.961079 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Dec 12 18:38:39.961322 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Dec 12 18:38:39.961474 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 12 18:38:39.961658 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:38:39.961816 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Dec 12 18:38:39.961961 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Dec 12 18:38:39.962165 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 12 18:38:39.962289 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:38:39.962395 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Dec 12 18:38:39.962537 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Dec 12 18:38:39.962677 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 12 18:38:39.962831 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:38:39.962984 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Dec 12 18:38:39.964314 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Dec 12 18:38:39.964479 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 12 18:38:39.964665 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 18:38:39.964853 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Dec 12 18:38:39.964994 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 12 18:38:39.965015 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:38:39.965237 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:38:39.965262 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:38:39.965277 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:38:39.965293 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 12 18:38:39.965309 kernel: iommu: Default domain type: Translated
Dec 12 18:38:39.965324 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:38:39.965340 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:38:39.965355 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:38:39.965371 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 12 18:38:39.965386 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Dec 12 18:38:39.965571 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 12 18:38:39.965716 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 12 18:38:39.965859 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:38:39.965879 kernel: vgaarb: loaded
Dec 12 18:38:39.965895 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:38:39.965911 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:38:39.965927 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:38:39.965942 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:38:39.965964 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:38:39.965979 kernel: pnp: PnP ACPI init
Dec 12 18:38:39.965994 kernel: pnp: PnP ACPI: found 4 devices
Dec 12 18:38:39.966010 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:38:39.967078 kernel: NET: Registered PF_INET protocol family
Dec 12 18:38:39.967100 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:38:39.967116 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 12 18:38:39.967132 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:38:39.967148 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 12 18:38:39.967170 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 12 18:38:39.967186 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 12 18:38:39.967201 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 12 18:38:39.967217 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 12 18:38:39.967232 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:38:39.967248 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:38:39.967425 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:38:39.967557 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:38:39.967685 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:38:39.967778 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 12 18:38:39.967860 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 12 18:38:39.967965 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 12 18:38:39.968079 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 12 18:38:39.968093 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 12 18:38:39.968190 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 27223 usecs
Dec 12 18:38:39.968203 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:38:39.968212 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 12 18:38:39.968226 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Dec 12 18:38:39.968235 kernel: Initialise system trusted keyrings
Dec 12 18:38:39.968245 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 12 18:38:39.968254 kernel: Key type asymmetric registered
Dec 12 18:38:39.968263 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:38:39.968272 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:38:39.968281 kernel: io scheduler mq-deadline registered
Dec 12 18:38:39.968290 kernel: io scheduler kyber registered
Dec 12 18:38:39.968300 kernel: io scheduler bfq registered
Dec 12 18:38:39.968312 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:38:39.968321 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 12 18:38:39.968331 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 12 18:38:39.968340 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 12 18:38:39.968349 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:38:39.968358 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:38:39.968366 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:38:39.968376 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:38:39.968384 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:38:39.968568 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 12 18:38:39.968590 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:38:39.968698 kernel: rtc_cmos 00:03: registered as rtc0
Dec 12 18:38:39.968785 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:38:39 UTC (1765564719)
Dec 12 18:38:39.968870 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 12 18:38:39.968881 kernel: intel_pstate: CPU model not supported
Dec 12 18:38:39.968891 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:38:39.968905 kernel: Segment Routing with IPv6
Dec 12 18:38:39.968914 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:38:39.968923 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:38:39.968933 kernel: Key type dns_resolver registered
Dec 12 18:38:39.968942 kernel: IPI shorthand broadcast: enabled
Dec 12 18:38:39.968951 kernel: sched_clock: Marking stable (3355008959, 149913112)->(3628939303, -124017232)
Dec 12 18:38:39.968960 kernel: registered taskstats version 1
Dec 12 18:38:39.968969 kernel: Loading compiled-in X.509 certificates
Dec 12 18:38:39.968978 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:38:39.968990 kernel: Demotion targets for Node 0: null
Dec 12 18:38:39.968998 kernel: Key type .fscrypt registered
Dec 12 18:38:39.969007 kernel: Key type fscrypt-provisioning registered
Dec 12 18:38:39.970115 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:38:39.970141 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:38:39.970158 kernel: ima: No architecture policies found
Dec 12 18:38:39.970174 kernel: clk: Disabling unused clocks
Dec 12 18:38:39.970190 kernel: Warning: unable to open an initial console.
Dec 12 18:38:39.970207 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:38:39.970227 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:38:39.970248 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:38:39.970264 kernel: Run /init as init process
Dec 12 18:38:39.970281 kernel: with arguments:
Dec 12 18:38:39.970297 kernel: /init
Dec 12 18:38:39.970314 kernel: with environment:
Dec 12 18:38:39.970329 kernel: HOME=/
Dec 12 18:38:39.970346 kernel: TERM=linux
Dec 12 18:38:39.970363 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:38:39.970390 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:38:39.970407 systemd[1]: Detected virtualization kvm.
Dec 12 18:38:39.970421 systemd[1]: Detected architecture x86-64.
Dec 12 18:38:39.970437 systemd[1]: Running in initrd.
Dec 12 18:38:39.970453 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:38:39.970470 systemd[1]: Hostname set to .
Dec 12 18:38:39.970487 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:38:39.970513 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:38:39.970530 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:38:39.970547 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:38:39.970565 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:38:39.970583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:38:39.970600 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:38:39.970622 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:38:39.970641 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:38:39.970659 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:38:39.970676 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:38:39.970693 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:38:39.970711 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:38:39.970731 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:38:39.970748 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:38:39.970765 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:38:39.970784 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:38:39.970800 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:38:39.970817 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:38:39.970835 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:38:39.970853 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:38:39.970870 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:38:39.970892 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:38:39.970910 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:38:39.970926 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:38:39.970943 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:38:39.970960 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:38:39.970979 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:38:39.970996 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:38:39.971015 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:38:39.972083 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:38:39.972108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:38:39.972126 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:38:39.972207 systemd-journald[191]: Collecting audit messages is disabled.
Dec 12 18:38:39.972254 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:38:39.972272 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:38:39.972291 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:38:39.972311 systemd-journald[191]: Journal started
Dec 12 18:38:39.972352 systemd-journald[191]: Runtime Journal (/run/log/journal/a5624fef848449d99c62f93c68a01d1e) is 4.9M, max 39.2M, 34.3M free.
Dec 12 18:38:39.945970 systemd-modules-load[193]: Inserted module 'overlay'
Dec 12 18:38:39.980786 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:38:39.995176 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:38:39.996262 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:38:40.055944 kernel: Bridge firewalling registered
Dec 12 18:38:40.001794 systemd-modules-load[193]: Inserted module 'br_netfilter'
Dec 12 18:38:40.055981 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:38:40.066313 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:38:40.067180 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:38:40.073270 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:38:40.074601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:38:40.077508 systemd-tmpfiles[207]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:38:40.081183 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:38:40.087439 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:38:40.104484 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:38:40.112423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:38:40.115269 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:38:40.128783 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:38:40.130462 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 18:38:40.155790 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:38:40.177887 systemd-resolved[224]: Positive Trust Anchors: Dec 12 18:38:40.177901 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:38:40.177938 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:38:40.181213 systemd-resolved[224]: Defaulting to hostname 'linux'. Dec 12 18:38:40.183242 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:38:40.183772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:38:40.285115 kernel: SCSI subsystem initialized Dec 12 18:38:40.299085 kernel: Loading iSCSI transport class v2.0-870. 
Dec 12 18:38:40.315085 kernel: iscsi: registered transport (tcp) Dec 12 18:38:40.346555 kernel: iscsi: registered transport (qla4xxx) Dec 12 18:38:40.346649 kernel: QLogic iSCSI HBA Driver Dec 12 18:38:40.374365 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:38:40.403403 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:38:40.407122 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:38:40.472837 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 18:38:40.476480 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 18:38:40.539108 kernel: raid6: avx2x4 gen() 19567 MB/s Dec 12 18:38:40.557091 kernel: raid6: avx2x2 gen() 16905 MB/s Dec 12 18:38:40.574316 kernel: raid6: avx2x1 gen() 20135 MB/s Dec 12 18:38:40.574428 kernel: raid6: using algorithm avx2x1 gen() 20135 MB/s Dec 12 18:38:40.594117 kernel: raid6: .... xor() 13961 MB/s, rmw enabled Dec 12 18:38:40.594235 kernel: raid6: using avx2x2 recovery algorithm Dec 12 18:38:40.616068 kernel: xor: automatically using best checksumming function avx Dec 12 18:38:40.793083 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 18:38:40.803889 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:38:40.809072 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:38:40.840626 systemd-udevd[441]: Using default interface naming scheme 'v255'. Dec 12 18:38:40.848211 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:38:40.852297 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 18:38:40.890700 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation Dec 12 18:38:40.928765 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 12 18:38:40.930772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:38:41.008139 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:38:41.011338 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 18:38:41.103259 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Dec 12 18:38:41.115098 kernel: scsi host0: Virtio SCSI HBA Dec 12 18:38:41.115194 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 12 18:38:41.133189 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 12 18:38:41.137053 kernel: libata version 3.00 loaded. Dec 12 18:38:41.141068 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 18:38:41.162640 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 18:38:41.162721 kernel: GPT:9289727 != 125829119 Dec 12 18:38:41.162735 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 18:38:41.162747 kernel: GPT:9289727 != 125829119 Dec 12 18:38:41.162759 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 18:38:41.162782 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:38:41.164074 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 12 18:38:41.179085 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 12 18:38:41.182079 kernel: scsi host1: ata_piix Dec 12 18:38:41.189819 kernel: AES CTR mode by8 optimization enabled Dec 12 18:38:41.193105 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 12 18:38:41.196479 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Dec 12 18:38:41.223226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:38:41.224184 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 12 18:38:41.230490 kernel: scsi host2: ata_piix Dec 12 18:38:41.231983 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Dec 12 18:38:41.232006 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Dec 12 18:38:41.231243 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:41.236087 kernel: ACPI: bus type USB registered Dec 12 18:38:41.235387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:41.237168 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:38:41.243400 kernel: usbcore: registered new interface driver usbfs Dec 12 18:38:41.250232 kernel: usbcore: registered new interface driver hub Dec 12 18:38:41.257056 kernel: usbcore: registered new device driver usb Dec 12 18:38:41.325755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:38:41.359258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:41.368961 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 18:38:41.388321 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 18:38:41.397153 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 12 18:38:41.399376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 18:38:41.401622 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Dec 12 18:38:41.417068 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 12 18:38:41.419875 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 12 18:38:41.420171 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 12 18:38:41.422106 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 12 18:38:41.424412 kernel: hub 1-0:1.0: USB hub found Dec 12 18:38:41.424751 kernel: hub 1-0:1.0: 2 ports detected Dec 12 18:38:41.427353 disk-uuid[595]: Primary Header is updated. Dec 12 18:38:41.427353 disk-uuid[595]: Secondary Entries is updated. Dec 12 18:38:41.427353 disk-uuid[595]: Secondary Header is updated. Dec 12 18:38:41.436119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:38:41.446124 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:38:41.593242 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 18:38:41.611488 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:38:41.612094 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:38:41.613003 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:38:41.615531 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 18:38:41.651729 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:38:42.446367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:38:42.447484 disk-uuid[596]: The operation has completed successfully. Dec 12 18:38:42.510425 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 18:38:42.510611 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 18:38:42.559058 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 18:38:42.578559 sh[620]: Success Dec 12 18:38:42.605816 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 18:38:42.605963 kernel: device-mapper: uevent: version 1.0.3 Dec 12 18:38:42.609078 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 18:38:42.623076 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Dec 12 18:38:42.679331 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:38:42.684212 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 12 18:38:42.706004 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 12 18:38:42.718621 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (632) Dec 12 18:38:42.718712 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 12 18:38:42.722152 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:38:42.729096 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 18:38:42.729222 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 18:38:42.733602 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 18:38:42.734505 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:38:42.735241 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 18:38:42.736338 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 18:38:42.741900 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 12 18:38:42.774140 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (664) Dec 12 18:38:42.777111 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:38:42.777198 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:38:42.784191 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:38:42.784297 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:38:42.793099 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:38:42.796791 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 18:38:42.800301 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 18:38:42.933855 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:38:42.940762 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:38:43.035595 ignition[709]: Ignition 2.22.0 Dec 12 18:38:43.036579 ignition[709]: Stage: fetch-offline Dec 12 18:38:43.035757 systemd-networkd[807]: lo: Link UP Dec 12 18:38:43.036676 ignition[709]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:43.035762 systemd-networkd[807]: lo: Gained carrier Dec 12 18:38:43.036698 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:43.038720 systemd-networkd[807]: Enumeration completed Dec 12 18:38:43.036889 ignition[709]: parsed url from cmdline: "" Dec 12 18:38:43.038926 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:38:43.036895 ignition[709]: no config URL provided Dec 12 18:38:43.039293 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. 
Dec 12 18:38:43.036905 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:38:43.039298 systemd-networkd[807]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 12 18:38:43.036919 ignition[709]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:38:43.040810 systemd-networkd[807]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:38:43.036929 ignition[709]: failed to fetch config: resource requires networking Dec 12 18:38:43.040815 systemd-networkd[807]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:38:43.037220 ignition[709]: Ignition finished successfully Dec 12 18:38:43.042233 systemd-networkd[807]: eth0: Link UP Dec 12 18:38:43.042312 systemd[1]: Reached target network.target - Network. Dec 12 18:38:43.042446 systemd-networkd[807]: eth1: Link UP Dec 12 18:38:43.042610 systemd-networkd[807]: eth0: Gained carrier Dec 12 18:38:43.042629 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 12 18:38:43.045400 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:38:43.048784 systemd-networkd[807]: eth1: Gained carrier Dec 12 18:38:43.048813 systemd-networkd[807]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:38:43.053957 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 12 18:38:43.059225 systemd-networkd[807]: eth0: DHCPv4 address 143.198.226.225/20, gateway 143.198.224.1 acquired from 169.254.169.253 Dec 12 18:38:43.063703 systemd-networkd[807]: eth1: DHCPv4 address 10.124.0.31/20 acquired from 169.254.169.253 Dec 12 18:38:43.100429 ignition[811]: Ignition 2.22.0 Dec 12 18:38:43.100444 ignition[811]: Stage: fetch Dec 12 18:38:43.101056 ignition[811]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:43.101072 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:43.101199 ignition[811]: parsed url from cmdline: "" Dec 12 18:38:43.101203 ignition[811]: no config URL provided Dec 12 18:38:43.101209 ignition[811]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:38:43.101220 ignition[811]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:38:43.101269 ignition[811]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 12 18:38:43.117360 ignition[811]: GET result: OK Dec 12 18:38:43.118416 ignition[811]: parsing config with SHA512: a697de2dd5807bd5cfb2573ccd2631a8a81c04d7e6cbc119ad98f99d5958153f28acf11dc4532756f3217b41834403b6d02c0e38c7d55e7bcc3a55c0170e0bf3 Dec 12 18:38:43.129318 unknown[811]: fetched base config from "system" Dec 12 18:38:43.130545 ignition[811]: fetch: fetch complete Dec 12 18:38:43.129348 unknown[811]: fetched base config from "system" Dec 12 18:38:43.130557 ignition[811]: fetch: fetch passed Dec 12 18:38:43.129360 unknown[811]: fetched user config from "digitalocean" Dec 12 18:38:43.130669 ignition[811]: Ignition finished successfully Dec 12 18:38:43.136954 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 18:38:43.139572 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 12 18:38:43.184954 ignition[818]: Ignition 2.22.0 Dec 12 18:38:43.184971 ignition[818]: Stage: kargs Dec 12 18:38:43.185218 ignition[818]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:43.185229 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:43.186430 ignition[818]: kargs: kargs passed Dec 12 18:38:43.189726 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:38:43.186506 ignition[818]: Ignition finished successfully Dec 12 18:38:43.192503 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 18:38:43.234886 ignition[825]: Ignition 2.22.0 Dec 12 18:38:43.234902 ignition[825]: Stage: disks Dec 12 18:38:43.236496 ignition[825]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:43.236514 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:43.238372 ignition[825]: disks: disks passed Dec 12 18:38:43.238445 ignition[825]: Ignition finished successfully Dec 12 18:38:43.240577 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 18:38:43.242467 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:38:43.243221 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:38:43.244164 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:38:43.245175 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:38:43.245972 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:38:43.248291 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 18:38:43.299233 systemd-fsck[834]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 18:38:43.304129 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:38:43.306296 systemd[1]: Mounting sysroot.mount - /sysroot... 
Dec 12 18:38:43.444094 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 12 18:38:43.445963 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:38:43.447806 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:38:43.451470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:38:43.454782 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 18:38:43.463439 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Dec 12 18:38:43.468229 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 12 18:38:43.471457 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:38:43.472467 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:38:43.475554 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 18:38:43.483087 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (842) Dec 12 18:38:43.486213 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:38:43.489100 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:38:43.489375 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 18:38:43.502442 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:38:43.502552 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:38:43.505366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 18:38:43.571429 coreos-metadata[844]: Dec 12 18:38:43.571 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:38:43.579298 coreos-metadata[845]: Dec 12 18:38:43.579 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:38:43.583068 initrd-setup-root[873]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:38:43.584116 coreos-metadata[844]: Dec 12 18:38:43.583 INFO Fetch successful Dec 12 18:38:43.592339 coreos-metadata[845]: Dec 12 18:38:43.591 INFO Fetch successful Dec 12 18:38:43.596149 initrd-setup-root[880]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:38:43.597663 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Dec 12 18:38:43.597853 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Dec 12 18:38:43.605216 initrd-setup-root[887]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:38:43.608145 coreos-metadata[845]: Dec 12 18:38:43.608 INFO wrote hostname ci-4459.2.2-f-e155308a0b to /sysroot/etc/hostname Dec 12 18:38:43.610454 initrd-setup-root[895]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:38:43.611896 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 18:38:43.747902 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:38:43.750245 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:38:43.752191 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 18:38:43.785178 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:38:43.787171 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:38:43.802425 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 12 18:38:43.840416 ignition[964]: INFO : Ignition 2.22.0 Dec 12 18:38:43.840416 ignition[964]: INFO : Stage: mount Dec 12 18:38:43.842008 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:43.842008 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:43.844919 ignition[964]: INFO : mount: mount passed Dec 12 18:38:43.844919 ignition[964]: INFO : Ignition finished successfully Dec 12 18:38:43.845907 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 18:38:43.849009 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:38:43.871824 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:38:43.899212 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (976) Dec 12 18:38:43.899298 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:38:43.902024 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:38:43.907568 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:38:43.907690 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:38:43.911071 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 18:38:43.958063 ignition[993]: INFO : Ignition 2.22.0 Dec 12 18:38:43.958063 ignition[993]: INFO : Stage: files Dec 12 18:38:43.959888 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:43.959888 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:43.961540 ignition[993]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:38:43.962498 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:38:43.962498 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:38:43.964713 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:38:43.965509 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:38:43.965509 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:38:43.965414 unknown[993]: wrote ssh authorized keys file for user: core Dec 12 18:38:43.968193 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:38:43.968193 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 12 18:38:44.005688 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:38:44.145367 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:38:44.145367 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:38:44.147322 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 
18:38:44.147322 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:38:44.147322 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:38:44.147322 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:38:44.147322 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:38:44.147322 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:38:44.147322 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:38:44.160918 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:38:44.160918 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:38:44.160918 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:38:44.160918 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:38:44.160918 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:38:44.160918 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 12 18:38:44.157267 systemd-networkd[807]: eth1: Gained IPv6LL Dec 12 18:38:44.569116 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 18:38:44.670231 systemd-networkd[807]: eth0: Gained IPv6LL Dec 12 18:38:44.840133 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:38:44.840133 ignition[993]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 18:38:44.841855 ignition[993]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:38:44.842758 ignition[993]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:38:44.842758 ignition[993]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 18:38:44.842758 ignition[993]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:38:44.842758 ignition[993]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:38:44.842758 ignition[993]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:38:44.842758 ignition[993]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:38:44.842758 ignition[993]: INFO : files: files passed Dec 12 18:38:44.842758 ignition[993]: INFO : Ignition finished successfully Dec 12 18:38:44.844615 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:38:44.848200 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Dec 12 18:38:44.850799 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:38:44.861159 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:38:44.861927 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:38:44.874252 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:38:44.874252 initrd-setup-root-after-ignition[1023]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:38:44.877179 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:38:44.879604 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:38:44.880438 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:38:44.882274 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:38:44.943295 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:38:44.943470 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:38:44.945427 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:38:44.945946 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:38:44.947125 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:38:44.948149 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:38:44.976748 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:38:44.981264 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:38:45.002172 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:38:45.003510 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:38:45.004166 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:38:45.004649 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:38:45.004787 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:38:45.006776 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:38:45.007351 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:38:45.008341 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:38:45.009200 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:38:45.010156 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:38:45.011161 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:38:45.012481 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:38:45.013477 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:38:45.014747 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:38:45.015854 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:38:45.016839 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:38:45.017683 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:38:45.017874 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:38:45.019214 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:38:45.020394 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:38:45.021203 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:38:45.021364 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:38:45.022287 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:38:45.022488 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:38:45.023794 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:38:45.023982 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:38:45.025013 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:38:45.025170 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:38:45.025886 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 12 18:38:45.026019 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 12 18:38:45.027952 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:38:45.032391 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:38:45.033720 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:38:45.035156 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:38:45.037001 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:38:45.037200 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:38:45.058898 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:38:45.059822 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:38:45.069933 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:38:45.074010 ignition[1047]: INFO : Ignition 2.22.0
Dec 12 18:38:45.074010 ignition[1047]: INFO : Stage: umount
Dec 12 18:38:45.076210 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:38:45.076210 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 12 18:38:45.076210 ignition[1047]: INFO : umount: umount passed
Dec 12 18:38:45.076210 ignition[1047]: INFO : Ignition finished successfully
Dec 12 18:38:45.077422 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:38:45.078157 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:38:45.078936 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:38:45.079080 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:38:45.080537 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:38:45.080669 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:38:45.081818 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:38:45.081870 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:38:45.082705 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 18:38:45.082749 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 18:38:45.083499 systemd[1]: Stopped target network.target - Network.
Dec 12 18:38:45.084264 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:38:45.084310 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:38:45.085223 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:38:45.086011 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:38:45.090207 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:38:45.090950 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:38:45.091879 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:38:45.092735 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:38:45.092785 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:38:45.093536 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:38:45.093575 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:38:45.094433 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:38:45.094499 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:38:45.095579 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:38:45.095625 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:38:45.096582 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:38:45.096635 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:38:45.097703 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:38:45.098695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:38:45.104857 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:38:45.105566 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:38:45.109953 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 18:38:45.110751 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:38:45.110872 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:38:45.114598 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:38:45.114899 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:38:45.115051 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:38:45.117256 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 18:38:45.118399 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:38:45.119560 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:38:45.119610 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:38:45.121702 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:38:45.122291 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:38:45.122348 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:38:45.122846 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:38:45.122885 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:38:45.124199 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:38:45.124262 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:38:45.124881 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:38:45.128824 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:38:45.146698 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:38:45.146906 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:38:45.149866 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:38:45.149983 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:38:45.152534 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:38:45.152584 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:38:45.153437 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:38:45.153496 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:38:45.156259 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:38:45.156318 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:38:45.157228 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:38:45.157290 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:38:45.161184 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:38:45.164584 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:38:45.164705 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:38:45.167880 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:38:45.167971 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:38:45.169741 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 12 18:38:45.169809 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:38:45.170883 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:38:45.170948 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:38:45.172932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:38:45.173004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:38:45.176366 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:38:45.178362 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:38:45.188135 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:38:45.188346 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:38:45.189758 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 18:38:45.191750 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 18:38:45.231084 systemd[1]: Switching root.
Dec 12 18:38:45.276836 systemd-journald[191]: Journal stopped
Dec 12 18:38:46.591709 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Dec 12 18:38:46.591914 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 18:38:46.591938 kernel: SELinux: policy capability open_perms=1
Dec 12 18:38:46.591950 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 18:38:46.592072 kernel: SELinux: policy capability always_check_network=0
Dec 12 18:38:46.592086 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 18:38:46.592103 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 18:38:46.592114 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 18:38:46.592126 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 18:38:46.592145 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 18:38:46.592157 kernel: audit: type=1403 audit(1765564725.459:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 18:38:46.592180 systemd[1]: Successfully loaded SELinux policy in 72.353ms.
Dec 12 18:38:46.593981 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.229ms.
Dec 12 18:38:46.594598 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:38:46.594634 systemd[1]: Detected virtualization kvm.
Dec 12 18:38:46.594647 systemd[1]: Detected architecture x86-64.
Dec 12 18:38:46.594659 systemd[1]: Detected first boot.
Dec 12 18:38:46.594673 systemd[1]: Hostname set to .
Dec 12 18:38:46.594686 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:38:46.594712 zram_generator::config[1091]: No configuration found.
Dec 12 18:38:46.594727 kernel: Guest personality initialized and is inactive
Dec 12 18:38:46.594740 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 12 18:38:46.594753 kernel: Initialized host personality
Dec 12 18:38:46.594764 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 18:38:46.594776 systemd[1]: Populated /etc with preset unit settings.
Dec 12 18:38:46.594791 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 18:38:46.602713 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 18:38:46.602760 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 18:38:46.602774 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:38:46.602788 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 18:38:46.602801 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 18:38:46.602814 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 18:38:46.602826 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 18:38:46.602839 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 18:38:46.602852 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 18:38:46.602865 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 18:38:46.602881 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 18:38:46.602893 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:38:46.602907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:38:46.602920 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 18:38:46.602934 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 18:38:46.602948 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 18:38:46.602964 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:38:46.602985 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 12 18:38:46.602999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:38:46.603011 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:38:46.616085 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 18:38:46.616154 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:38:46.616168 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:38:46.616181 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 18:38:46.616194 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:38:46.616215 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:38:46.616228 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:38:46.616241 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:38:46.616256 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 18:38:46.616268 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 18:38:46.616281 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 18:38:46.616294 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:38:46.616307 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:38:46.616320 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:38:46.616332 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 18:38:46.616347 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 18:38:46.616359 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 18:38:46.616373 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 18:38:46.616395 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:46.616413 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 18:38:46.616431 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 18:38:46.616450 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 18:38:46.616471 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 18:38:46.616494 systemd[1]: Reached target machines.target - Containers.
Dec 12 18:38:46.616512 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 18:38:46.616533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:38:46.616547 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:38:46.616559 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 18:38:46.616573 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:38:46.616586 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:38:46.616598 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:38:46.616615 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 18:38:46.616629 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:38:46.616642 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 18:38:46.616655 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 18:38:46.616667 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 18:38:46.616681 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 18:38:46.616693 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 18:38:46.616706 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:38:46.616718 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:38:46.616733 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:38:46.616750 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:38:46.616763 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 18:38:46.616775 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 18:38:46.616788 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:38:46.616803 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 18:38:46.616818 systemd[1]: Stopped verity-setup.service.
Dec 12 18:38:46.616831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:46.616843 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 18:38:46.616856 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 18:38:46.616871 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 18:38:46.616883 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 18:38:46.616895 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 18:38:46.616908 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 18:38:46.616931 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:38:46.616955 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 18:38:46.616969 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 18:38:46.616982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:38:46.616994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:38:46.617011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:38:46.617024 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:38:46.619172 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 18:38:46.619194 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:38:46.619209 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 18:38:46.619222 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:38:46.619236 kernel: loop: module loaded
Dec 12 18:38:46.619250 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 18:38:46.619272 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:38:46.619537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:38:46.619556 kernel: fuse: init (API version 7.41)
Dec 12 18:38:46.619569 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 18:38:46.619583 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:38:46.619601 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 18:38:46.619615 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 18:38:46.619628 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:38:46.619651 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 18:38:46.619667 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 18:38:46.619680 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:38:46.619693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:38:46.619705 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:38:46.619719 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:38:46.619732 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:38:46.619744 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:38:46.619807 systemd-journald[1165]: Collecting audit messages is disabled.
Dec 12 18:38:46.629054 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:38:46.629106 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:38:46.629128 kernel: ACPI: bus type drm_connector registered
Dec 12 18:38:46.629150 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:38:46.629171 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:38:46.629192 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:38:46.629213 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 18:38:46.629232 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:38:46.629252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:38:46.629276 systemd-journald[1165]: Journal started
Dec 12 18:38:46.629306 systemd-journald[1165]: Runtime Journal (/run/log/journal/a5624fef848449d99c62f93c68a01d1e) is 4.9M, max 39.2M, 34.3M free.
Dec 12 18:38:46.151548 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 18:38:46.177040 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 12 18:38:46.177627 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 18:38:46.642191 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 18:38:46.642269 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:38:46.670543 kernel: loop0: detected capacity change from 0 to 224512
Dec 12 18:38:46.680834 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 18:38:46.682832 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 18:38:46.685639 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 18:38:46.685806 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Dec 12 18:38:46.685829 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Dec 12 18:38:46.694394 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 18:38:46.707118 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:38:46.712440 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 18:38:46.724095 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:38:46.751344 systemd-journald[1165]: Time spent on flushing to /var/log/journal/a5624fef848449d99c62f93c68a01d1e is 49.731ms for 1020 entries.
Dec 12 18:38:46.751344 systemd-journald[1165]: System Journal (/var/log/journal/a5624fef848449d99c62f93c68a01d1e) is 8M, max 195.6M, 187.6M free.
Dec 12 18:38:46.814994 systemd-journald[1165]: Received client request to flush runtime journal.
Dec 12 18:38:46.815896 kernel: loop1: detected capacity change from 0 to 8
Dec 12 18:38:46.815985 kernel: loop2: detected capacity change from 0 to 110984
Dec 12 18:38:46.772959 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 18:38:46.811111 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 18:38:46.828420 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:38:46.830622 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 18:38:46.867657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:38:46.883076 kernel: loop3: detected capacity change from 0 to 128560
Dec 12 18:38:46.940147 kernel: loop4: detected capacity change from 0 to 224512
Dec 12 18:38:46.968166 kernel: loop5: detected capacity change from 0 to 8
Dec 12 18:38:46.973437 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Dec 12 18:38:46.975068 kernel: loop6: detected capacity change from 0 to 110984
Dec 12 18:38:46.974008 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Dec 12 18:38:46.993088 kernel: loop7: detected capacity change from 0 to 128560
Dec 12 18:38:46.996355 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:38:47.032777 (sd-merge)[1243]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Dec 12 18:38:47.035013 (sd-merge)[1243]: Merged extensions into '/usr'.
Dec 12 18:38:47.048572 systemd[1]: Reload requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:38:47.048784 systemd[1]: Reloading...
Dec 12 18:38:47.319082 zram_generator::config[1270]: No configuration found.
Dec 12 18:38:47.354905 ldconfig[1185]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:38:47.555362 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:38:47.555811 systemd[1]: Reloading finished in 503 ms.
Dec 12 18:38:47.570846 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:38:47.576036 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:38:47.585124 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:38:47.590384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:38:47.632206 systemd[1]: Reload requested from client PID 1314 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:38:47.632231 systemd[1]: Reloading...
Dec 12 18:38:47.640578 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:38:47.640938 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:38:47.641415 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:38:47.641864 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 18:38:47.643391 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 18:38:47.643858 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Dec 12 18:38:47.643948 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Dec 12 18:38:47.651931 systemd-tmpfiles[1315]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:38:47.651952 systemd-tmpfiles[1315]: Skipping /boot
Dec 12 18:38:47.676160 systemd-tmpfiles[1315]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:38:47.676178 systemd-tmpfiles[1315]: Skipping /boot
Dec 12 18:38:47.749086 zram_generator::config[1338]: No configuration found.
Dec 12 18:38:47.985866 systemd[1]: Reloading finished in 353 ms.
Dec 12 18:38:48.000826 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:38:48.014257 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:38:48.028361 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:38:48.033390 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 18:38:48.035989 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:38:48.039425 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:38:48.044905 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:38:48.051800 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:38:48.055573 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:48.055888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:38:48.062085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:38:48.072642 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:38:48.076537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:38:48.077257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:38:48.077409 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:38:48.077522 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:48.086350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:48.086577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:38:48.086833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:38:48.086960 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:38:48.087070 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:48.091266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:48.091519 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:38:48.096732 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:38:48.098405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:38:48.098553 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:38:48.098697 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:48.103995 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 18:38:48.105490 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:38:48.116322 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 18:38:48.149502 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:38:48.155955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:38:48.158383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:38:48.158631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:38:48.162676 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:38:48.167716 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:38:48.169107 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:38:48.173126 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:38:48.184318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:38:48.184876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:38:48.186449 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:38:48.189105 systemd-udevd[1391]: Using default interface naming scheme 'v255'.
Dec 12 18:38:48.191593 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:38:48.197477 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:38:48.238473 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:38:48.254921 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:38:48.256322 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:38:48.256698 augenrules[1427]: No rules
Dec 12 18:38:48.259982 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:38:48.261235 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:38:48.270244 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:38:48.278315 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:38:48.282933 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 18:38:48.459696 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 12 18:38:48.480973 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Dec 12 18:38:48.485323 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Dec 12 18:38:48.487130 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:48.487376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:38:48.491251 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:38:48.494829 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:38:48.498733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:38:48.499510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:38:48.499568 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:38:48.499612 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:38:48.499637 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:48.525664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:38:48.530597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:38:48.539850 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:38:48.541621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:38:48.543679 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:38:48.544057 kernel: ISO 9660 Extensions: RRIP_1991A
Dec 12 18:38:48.550525 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:38:48.550858 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:38:48.551807 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:38:48.564427 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Dec 12 18:38:48.676200 systemd-networkd[1437]: lo: Link UP
Dec 12 18:38:48.676215 systemd-networkd[1437]: lo: Gained carrier
Dec 12 18:38:48.678888 systemd-resolved[1390]: Positive Trust Anchors:
Dec 12 18:38:48.680081 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:38:48.680136 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:38:48.682879 systemd-networkd[1437]: Enumeration completed
Dec 12 18:38:48.683091 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:38:48.684538 systemd-networkd[1437]: eth1: Configuring with /run/systemd/network/10-f2:be:51:ea:6a:de.network.
Dec 12 18:38:48.688346 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:38:48.689269 systemd-resolved[1390]: Using system hostname 'ci-4459.2.2-f-e155308a0b'.
Dec 12 18:38:48.689282 systemd-networkd[1437]: eth1: Link UP
Dec 12 18:38:48.689496 systemd-networkd[1437]: eth1: Gained carrier
Dec 12 18:38:48.692644 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:38:48.694284 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:38:48.694989 systemd[1]: Reached target network.target - Network.
Dec 12 18:38:48.696605 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:38:48.704579 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 18:38:48.711356 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 18:38:48.747350 systemd-networkd[1437]: eth0: Configuring with /run/systemd/network/10-a6:e9:ea:f8:b1:3e.network.
Dec 12 18:38:48.750610 systemd-networkd[1437]: eth0: Link UP
Dec 12 18:38:48.752369 systemd-networkd[1437]: eth0: Gained carrier
Dec 12 18:38:48.765466 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 18:38:48.778173 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:38:48.782331 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 18:38:48.783372 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:38:48.784223 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:38:48.786087 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 18:38:48.786752 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 12 18:38:48.787484 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 18:38:48.788159 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:38:48.788213 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:38:48.788762 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:38:48.789608 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 18:38:48.790371 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 18:38:48.791299 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:38:48.793288 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 18:38:48.796561 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 18:38:48.797555 systemd-timesyncd[1403]: Contacted time server 155.248.196.28:123 (0.flatcar.pool.ntp.org).
Dec 12 18:38:48.797611 systemd-timesyncd[1403]: Initial clock synchronization to Fri 2025-12-12 18:38:49.154822 UTC.
Dec 12 18:38:48.800769 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 18:38:48.801787 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 18:38:48.803587 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 18:38:48.812706 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 18:38:48.815295 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 18:38:48.818070 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 18:38:48.820744 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:38:48.821761 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:38:48.823479 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:38:48.823528 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:38:48.825603 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 18:38:48.829144 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 12 18:38:48.833136 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 18:38:48.837401 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 18:38:48.842278 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 18:38:48.846353 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 18:38:48.846997 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:38:48.852863 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 12 18:38:48.858346 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 18:38:48.870405 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 18:38:48.877353 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 18:38:48.884397 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 18:38:48.891423 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 18:38:48.894698 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 18:38:48.895546 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 18:38:48.903047 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 18:38:48.908656 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 18:38:48.920383 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 18:38:48.933758 oslogin_cache_refresh[1503]: Refreshing passwd entry cache
Dec 12 18:38:48.934462 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Refreshing passwd entry cache
Dec 12 18:38:48.939094 jq[1501]: false
Dec 12 18:38:48.939479 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 18:38:48.939829 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 18:38:48.958285 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Failure getting users, quitting
Dec 12 18:38:48.958406 coreos-metadata[1498]: Dec 12 18:38:48.955 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 12 18:38:48.959723 oslogin_cache_refresh[1503]: Failure getting users, quitting
Dec 12 18:38:48.964578 extend-filesystems[1502]: Found /dev/vda6
Dec 12 18:38:48.970241 kernel: mousedev: PS/2 mouse device common for all mice
Dec 12 18:38:48.970288 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:38:48.970288 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Refreshing group entry cache
Dec 12 18:38:48.970288 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Failure getting groups, quitting
Dec 12 18:38:48.970288 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:38:48.961573 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 18:38:48.959856 oslogin_cache_refresh[1503]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:38:48.959919 oslogin_cache_refresh[1503]: Refreshing group entry cache
Dec 12 18:38:48.963579 oslogin_cache_refresh[1503]: Failure getting groups, quitting
Dec 12 18:38:48.963596 oslogin_cache_refresh[1503]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:38:48.976199 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 12 18:38:48.976303 coreos-metadata[1498]: Dec 12 18:38:48.973 INFO Fetch successful
Dec 12 18:38:48.979176 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 18:38:48.990241 kernel: ACPI: button: Power Button [PWRF]
Dec 12 18:38:48.980946 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 12 18:38:48.981438 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 12 18:38:49.007674 extend-filesystems[1502]: Found /dev/vda9
Dec 12 18:38:49.003660 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 18:38:49.021140 jq[1512]: true
Dec 12 18:38:49.048127 extend-filesystems[1502]: Checking size of /dev/vda9
Dec 12 18:38:49.074481 jq[1536]: true
Dec 12 18:38:49.086390 tar[1519]: linux-amd64/LICENSE
Dec 12 18:38:49.086390 tar[1519]: linux-amd64/helm
Dec 12 18:38:49.084420 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 18:38:49.084833 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 18:38:49.093435 dbus-daemon[1499]: [system] SELinux support is enabled
Dec 12 18:38:49.094124 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 18:38:49.100325 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 18:38:49.100388 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 18:38:49.101225 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 18:38:49.101359 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Dec 12 18:38:49.101388 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 18:38:49.109132 update_engine[1511]: I20251212 18:38:49.106619 1511 main.cc:92] Flatcar Update Engine starting
Dec 12 18:38:49.130722 extend-filesystems[1502]: Resized partition /dev/vda9
Dec 12 18:38:49.131223 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 12 18:38:49.133959 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 12 18:38:49.140500 extend-filesystems[1551]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 18:38:49.147777 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Dec 12 18:38:49.140540 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:38:49.147909 update_engine[1511]: I20251212 18:38:49.142750 1511 update_check_scheduler.cc:74] Next update check in 3m28s
Dec 12 18:38:49.152212 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 18:38:49.331389 bash[1570]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:38:49.345122 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 12 18:38:49.346953 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 18:38:49.370080 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 12 18:38:49.373774 systemd[1]: Starting sshkeys.service...
Dec 12 18:38:49.400836 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 12 18:38:49.406839 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 18:38:49.483102 extend-filesystems[1551]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 12 18:38:49.483102 extend-filesystems[1551]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 12 18:38:49.483102 extend-filesystems[1551]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 12 18:38:49.409904 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 18:38:49.490641 extend-filesystems[1502]: Resized filesystem in /dev/vda9
Dec 12 18:38:49.420530 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 12 18:38:49.424780 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 12 18:38:49.431350 systemd-logind[1510]: New seat seat0.
Dec 12 18:38:49.435048 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 18:38:49.502574 coreos-metadata[1577]: Dec 12 18:38:49.501 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 12 18:38:49.502962 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 12 18:38:49.519800 coreos-metadata[1577]: Dec 12 18:38:49.516 INFO Fetch successful
Dec 12 18:38:49.540158 unknown[1577]: wrote ssh authorized keys file for user: core
Dec 12 18:38:49.624315 update-ssh-keys[1590]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:38:49.622487 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 12 18:38:49.629134 systemd[1]: Finished sshkeys.service.
Dec 12 18:38:49.686675 containerd[1527]: time="2025-12-12T18:38:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 18:38:49.689544 containerd[1527]: time="2025-12-12T18:38:49.687504000Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.718942344Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.555µs"
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720252593Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720321636Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720538686Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720555494Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720586708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720653943Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720671579Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720945972Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720965188Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720978629Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:38:49.724099 containerd[1527]: time="2025-12-12T18:38:49.720989324Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 18:38:49.731109 containerd[1527]: time="2025-12-12T18:38:49.728295623Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 18:38:49.731109 containerd[1527]: time="2025-12-12T18:38:49.728631764Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:38:49.731109 containerd[1527]: time="2025-12-12T18:38:49.728679044Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:38:49.731109 containerd[1527]: time="2025-12-12T18:38:49.728721775Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 18:38:49.731109 containerd[1527]: time="2025-12-12T18:38:49.728789456Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 18:38:49.733476 containerd[1527]: time="2025-12-12T18:38:49.733403275Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 18:38:49.733746 containerd[1527]: time="2025-12-12T18:38:49.733725474Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 18:38:49.746228 containerd[1527]: time="2025-12-12T18:38:49.746037506Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746473513Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746551577Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746570200Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746584769Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746597346Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746617653Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746648754Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746665376Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746678240Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746688585Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746705101Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746914522Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746949568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 12 18:38:49.747109 containerd[1527]: time="2025-12-12T18:38:49.746978972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 12 18:38:49.747696 containerd[1527]: time="2025-12-12T18:38:49.747001549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 12 18:38:49.747696 containerd[1527]: time="2025-12-12T18:38:49.747021215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 12 18:38:49.747696 containerd[1527]: time="2025-12-12T18:38:49.747039611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 12 18:38:49.747696 containerd[1527]: time="2025-12-12T18:38:49.747058454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 12 18:38:49.753093 containerd[1527]: time="2025-12-12T18:38:49.750262398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 12 18:38:49.753093 containerd[1527]: time="2025-12-12T18:38:49.750319587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 12 18:38:49.753093 containerd[1527]: time="2025-12-12T18:38:49.750335863Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 12 18:38:49.753093 containerd[1527]: time="2025-12-12T18:38:49.750350247Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 12 18:38:49.753093 containerd[1527]: time="2025-12-12T18:38:49.750423617Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 12 18:38:49.753093 containerd[1527]: time="2025-12-12T18:38:49.750440247Z" level=info msg="Start snapshots syncer"
Dec 12 18:38:49.753093 containerd[1527]: time="2025-12-12T18:38:49.750526736Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 12 18:38:49.753399 containerd[1527]: time="2025-12-12T18:38:49.750878885Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 12 18:38:49.753399 containerd[1527]: time="2025-12-12T18:38:49.750944705Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 12 18:38:49.753629 containerd[1527]: time="2025-12-12T18:38:49.751010959Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758697275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758758855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758773160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758785359Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758811181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758825771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758838656Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758870901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758900648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758928417Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758978571Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.758997166Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:38:49.760101 containerd[1527]: time="2025-12-12T18:38:49.759007621Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:38:49.760631 containerd[1527]: time="2025-12-12T18:38:49.759017994Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:38:49.760631 containerd[1527]: time="2025-12-12T18:38:49.759029017Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:38:49.760631 containerd[1527]: time="2025-12-12T18:38:49.759039598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:38:49.760631 containerd[1527]: time="2025-12-12T18:38:49.759060301Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:38:49.760631 containerd[1527]: time="2025-12-12T18:38:49.759094982Z" level=info msg="runtime interface created" Dec 12 18:38:49.760631 containerd[1527]: time="2025-12-12T18:38:49.759101762Z" level=info msg="created NRI interface" Dec 12 18:38:49.760631 containerd[1527]: time="2025-12-12T18:38:49.759111760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:38:49.760631 containerd[1527]: time="2025-12-12T18:38:49.759130384Z" level=info msg="Connect containerd service" Dec 12 18:38:49.760631 containerd[1527]: time="2025-12-12T18:38:49.759150751Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:38:49.770901 
containerd[1527]: time="2025-12-12T18:38:49.768216684Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:38:49.853470 systemd-networkd[1437]: eth1: Gained IPv6LL Dec 12 18:38:49.858550 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:38:49.859694 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:38:49.867469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:38:49.875406 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:38:49.941981 sshd_keygen[1542]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:38:50.066875 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:38:50.090196 containerd[1527]: time="2025-12-12T18:38:50.089599360Z" level=info msg="Start subscribing containerd event" Dec 12 18:38:50.090196 containerd[1527]: time="2025-12-12T18:38:50.089920499Z" level=info msg="Start recovering state" Dec 12 18:38:50.090647 containerd[1527]: time="2025-12-12T18:38:50.089826170Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:38:50.090792 containerd[1527]: time="2025-12-12T18:38:50.090751837Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 12 18:38:50.094445 containerd[1527]: time="2025-12-12T18:38:50.094186484Z" level=info msg="Start event monitor" Dec 12 18:38:50.094445 containerd[1527]: time="2025-12-12T18:38:50.094236104Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:38:50.094445 containerd[1527]: time="2025-12-12T18:38:50.094286727Z" level=info msg="Start streaming server" Dec 12 18:38:50.094445 containerd[1527]: time="2025-12-12T18:38:50.094299291Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:38:50.094445 containerd[1527]: time="2025-12-12T18:38:50.094309719Z" level=info msg="runtime interface starting up..." Dec 12 18:38:50.094445 containerd[1527]: time="2025-12-12T18:38:50.094320672Z" level=info msg="starting plugins..." Dec 12 18:38:50.094445 containerd[1527]: time="2025-12-12T18:38:50.094342496Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:38:50.105331 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:38:50.112206 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:38:50.117817 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:38:50.120506 containerd[1527]: time="2025-12-12T18:38:50.118955392Z" level=info msg="containerd successfully booted in 0.432831s" Dec 12 18:38:50.191390 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:38:50.192477 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:38:50.192861 systemd-logind[1510]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:38:50.217892 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:38:50.223573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 12 18:38:50.288233 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:38:50.290562 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:38:50.296870 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:38:50.302943 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:38:50.303813 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:38:50.558835 systemd-networkd[1437]: eth0: Gained IPv6LL Dec 12 18:38:50.615975 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 12 18:38:50.618663 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 12 18:38:50.652693 kernel: Console: switching to colour dummy device 80x25 Dec 12 18:38:50.652803 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 12 18:38:50.652820 kernel: [drm] features: -context_init Dec 12 18:38:50.657876 kernel: [drm] number of scanouts: 1 Dec 12 18:38:50.658324 systemd-vconsole-setup[1641]: KD_FONT_OP_SET failed, fonts will not be copied to tty4: Function not implemented Dec 12 18:38:50.658365 systemd-vconsole-setup[1641]: KD_FONT_OP_SET failed, fonts will not be copied to tty5: Function not implemented Dec 12 18:38:50.658391 systemd-vconsole-setup[1641]: KD_FONT_OP_SET failed, fonts will not be copied to tty6: Function not implemented Dec 12 18:38:50.661377 kernel: [drm] number of cap sets: 0 Dec 12 18:38:50.661936 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 12 18:38:50.667093 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Dec 12 18:38:50.680095 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 12 18:38:50.685294 kernel: Console: switching to colour frame buffer device 128x48 Dec 12 18:38:50.697521 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 12 18:38:50.722132 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:38:50.722340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:50.723231 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:50.728817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:50.734961 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:38:50.764246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:38:50.764702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:50.768514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:50.798701 tar[1519]: linux-amd64/README.md Dec 12 18:38:50.803155 kernel: EDAC MC: Ver: 3.0.0 Dec 12 18:38:50.831562 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:38:50.859896 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:51.507337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:38:51.508202 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:38:51.511494 systemd[1]: Startup finished in 3.427s (kernel) + 5.803s (initrd) + 6.121s (userspace) = 15.352s. 
Dec 12 18:38:51.517439 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:38:52.239145 kubelet[1670]: E1212 18:38:52.239086 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:38:52.243238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:38:52.243419 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:38:52.243898 systemd[1]: kubelet.service: Consumed 1.312s CPU time, 265.9M memory peak. Dec 12 18:38:53.958728 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:38:53.960109 systemd[1]: Started sshd@0-143.198.226.225:22-147.75.109.163:46258.service - OpenSSH per-connection server daemon (147.75.109.163:46258). Dec 12 18:38:54.059084 sshd[1682]: Accepted publickey for core from 147.75.109.163 port 46258 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:54.061552 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:54.076184 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:38:54.079351 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:38:54.082808 systemd-logind[1510]: New session 1 of user core. Dec 12 18:38:54.109122 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:38:54.112135 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 12 18:38:54.128449 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:38:54.131834 systemd-logind[1510]: New session c1 of user core. Dec 12 18:38:54.281422 systemd[1687]: Queued start job for default target default.target. Dec 12 18:38:54.296304 systemd[1687]: Created slice app.slice - User Application Slice. Dec 12 18:38:54.296527 systemd[1687]: Reached target paths.target - Paths. Dec 12 18:38:54.296645 systemd[1687]: Reached target timers.target - Timers. Dec 12 18:38:54.298249 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:38:54.311833 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:38:54.311955 systemd[1687]: Reached target sockets.target - Sockets. Dec 12 18:38:54.312007 systemd[1687]: Reached target basic.target - Basic System. Dec 12 18:38:54.312062 systemd[1687]: Reached target default.target - Main User Target. Dec 12 18:38:54.312096 systemd[1687]: Startup finished in 171ms. Dec 12 18:38:54.312284 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:38:54.320346 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:38:54.396522 systemd[1]: Started sshd@1-143.198.226.225:22-147.75.109.163:46262.service - OpenSSH per-connection server daemon (147.75.109.163:46262). Dec 12 18:38:54.475499 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 46262 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:54.477474 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:54.485664 systemd-logind[1510]: New session 2 of user core. Dec 12 18:38:54.496360 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 12 18:38:54.567311 sshd[1701]: Connection closed by 147.75.109.163 port 46262 Dec 12 18:38:54.568029 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:54.579438 systemd[1]: sshd@1-143.198.226.225:22-147.75.109.163:46262.service: Deactivated successfully. Dec 12 18:38:54.583886 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:38:54.585199 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:38:54.590356 systemd[1]: Started sshd@2-143.198.226.225:22-147.75.109.163:46266.service - OpenSSH per-connection server daemon (147.75.109.163:46266). Dec 12 18:38:54.594025 systemd-logind[1510]: Removed session 2. Dec 12 18:38:54.658630 sshd[1707]: Accepted publickey for core from 147.75.109.163 port 46266 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:54.660405 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:54.669167 systemd-logind[1510]: New session 3 of user core. Dec 12 18:38:54.675397 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:38:54.735348 sshd[1710]: Connection closed by 147.75.109.163 port 46266 Dec 12 18:38:54.736305 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:54.751806 systemd[1]: sshd@2-143.198.226.225:22-147.75.109.163:46266.service: Deactivated successfully. Dec 12 18:38:54.755278 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:38:54.757243 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:38:54.759715 systemd-logind[1510]: Removed session 3. Dec 12 18:38:54.761472 systemd[1]: Started sshd@3-143.198.226.225:22-147.75.109.163:46280.service - OpenSSH per-connection server daemon (147.75.109.163:46280). 
Dec 12 18:38:54.839831 sshd[1716]: Accepted publickey for core from 147.75.109.163 port 46280 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:54.843493 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:54.852211 systemd-logind[1510]: New session 4 of user core. Dec 12 18:38:54.861513 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:38:54.931022 sshd[1719]: Connection closed by 147.75.109.163 port 46280 Dec 12 18:38:54.931895 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:54.943740 systemd[1]: sshd@3-143.198.226.225:22-147.75.109.163:46280.service: Deactivated successfully. Dec 12 18:38:54.946279 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:38:54.947510 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:38:54.952323 systemd[1]: Started sshd@4-143.198.226.225:22-147.75.109.163:46288.service - OpenSSH per-connection server daemon (147.75.109.163:46288). Dec 12 18:38:54.953716 systemd-logind[1510]: Removed session 4. Dec 12 18:38:55.024699 sshd[1725]: Accepted publickey for core from 147.75.109.163 port 46288 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:55.026738 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:55.035142 systemd-logind[1510]: New session 5 of user core. Dec 12 18:38:55.045537 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 12 18:38:55.177220 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:38:55.177611 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:38:55.196295 sudo[1729]: pam_unix(sudo:session): session closed for user root Dec 12 18:38:55.203100 sshd[1728]: Connection closed by 147.75.109.163 port 46288 Dec 12 18:38:55.202529 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:55.217362 systemd[1]: sshd@4-143.198.226.225:22-147.75.109.163:46288.service: Deactivated successfully. Dec 12 18:38:55.220520 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:38:55.223980 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:38:55.228387 systemd[1]: Started sshd@5-143.198.226.225:22-147.75.109.163:46290.service - OpenSSH per-connection server daemon (147.75.109.163:46290). Dec 12 18:38:55.231249 systemd-logind[1510]: Removed session 5. Dec 12 18:38:55.313506 sshd[1735]: Accepted publickey for core from 147.75.109.163 port 46290 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:55.315475 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:55.323687 systemd-logind[1510]: New session 6 of user core. Dec 12 18:38:55.343466 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 12 18:38:55.411188 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:38:55.411613 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:38:55.417397 sudo[1740]: pam_unix(sudo:session): session closed for user root Dec 12 18:38:55.425726 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:38:55.426603 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:38:55.441911 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:38:55.504085 augenrules[1762]: No rules Dec 12 18:38:55.505122 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:38:55.505452 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:38:55.507652 sudo[1739]: pam_unix(sudo:session): session closed for user root Dec 12 18:38:55.512775 sshd[1738]: Connection closed by 147.75.109.163 port 46290 Dec 12 18:38:55.513528 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:55.524198 systemd[1]: sshd@5-143.198.226.225:22-147.75.109.163:46290.service: Deactivated successfully. Dec 12 18:38:55.527738 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:38:55.529126 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:38:55.533019 systemd[1]: Started sshd@6-143.198.226.225:22-147.75.109.163:46302.service - OpenSSH per-connection server daemon (147.75.109.163:46302). Dec 12 18:38:55.534944 systemd-logind[1510]: Removed session 6. 
Dec 12 18:38:55.647157 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 46302 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:55.661091 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:55.678513 systemd-logind[1510]: New session 7 of user core. Dec 12 18:38:55.688166 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:38:55.778123 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:38:55.779892 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:38:56.309709 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:38:56.332705 (dockerd)[1793]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:38:56.722034 dockerd[1793]: time="2025-12-12T18:38:56.721331038Z" level=info msg="Starting up" Dec 12 18:38:56.723007 dockerd[1793]: time="2025-12-12T18:38:56.722982950Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:38:56.740254 dockerd[1793]: time="2025-12-12T18:38:56.740186491Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:38:56.763165 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2615156546-merged.mount: Deactivated successfully. Dec 12 18:38:56.776652 systemd[1]: var-lib-docker-metacopy\x2dcheck3528491711-merged.mount: Deactivated successfully. Dec 12 18:38:56.805500 dockerd[1793]: time="2025-12-12T18:38:56.805204056Z" level=info msg="Loading containers: start." 
Dec 12 18:38:56.819080 kernel: Initializing XFRM netlink socket Dec 12 18:38:57.158357 systemd-networkd[1437]: docker0: Link UP Dec 12 18:38:57.162407 dockerd[1793]: time="2025-12-12T18:38:57.162281933Z" level=info msg="Loading containers: done." Dec 12 18:38:57.180063 dockerd[1793]: time="2025-12-12T18:38:57.179559177Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:38:57.180063 dockerd[1793]: time="2025-12-12T18:38:57.179700203Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:38:57.180063 dockerd[1793]: time="2025-12-12T18:38:57.179831571Z" level=info msg="Initializing buildkit" Dec 12 18:38:57.201759 dockerd[1793]: time="2025-12-12T18:38:57.201710055Z" level=info msg="Completed buildkit initialization" Dec 12 18:38:57.211451 dockerd[1793]: time="2025-12-12T18:38:57.211391362Z" level=info msg="Daemon has completed initialization" Dec 12 18:38:57.211632 dockerd[1793]: time="2025-12-12T18:38:57.211562748Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:38:57.211847 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:38:58.107323 containerd[1527]: time="2025-12-12T18:38:58.107207483Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 18:38:58.649594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount153893987.mount: Deactivated successfully. 
Dec 12 18:38:59.858966 containerd[1527]: time="2025-12-12T18:38:59.858876782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:59.860768 containerd[1527]: time="2025-12-12T18:38:59.860420306Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183" Dec 12 18:38:59.861483 containerd[1527]: time="2025-12-12T18:38:59.861439551Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:59.864451 containerd[1527]: time="2025-12-12T18:38:59.864405359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:59.866207 containerd[1527]: time="2025-12-12T18:38:59.866153973Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.758873351s" Dec 12 18:38:59.866609 containerd[1527]: time="2025-12-12T18:38:59.866376641Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 12 18:38:59.867214 containerd[1527]: time="2025-12-12T18:38:59.867164067Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 18:39:01.776790 containerd[1527]: time="2025-12-12T18:39:01.776666079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:01.779101 containerd[1527]: time="2025-12-12T18:39:01.778416151Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010" Dec 12 18:39:01.780064 containerd[1527]: time="2025-12-12T18:39:01.779965600Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:01.783522 containerd[1527]: time="2025-12-12T18:39:01.783432769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:01.784781 containerd[1527]: time="2025-12-12T18:39:01.784581118Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.917090374s" Dec 12 18:39:01.784781 containerd[1527]: time="2025-12-12T18:39:01.784650806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 12 18:39:01.785980 containerd[1527]: time="2025-12-12T18:39:01.785724376Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 18:39:02.494329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:39:02.497681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:39:02.754805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 18:39:02.770049 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:39:02.858801 kubelet[2084]: E1212 18:39:02.858730 2084 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:39:02.865739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:39:02.865966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:39:02.867141 systemd[1]: kubelet.service: Consumed 278ms CPU time, 108.8M memory peak. Dec 12 18:39:03.322830 containerd[1527]: time="2025-12-12T18:39:03.322776084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:03.325387 containerd[1527]: time="2025-12-12T18:39:03.325302553Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248" Dec 12 18:39:03.327061 containerd[1527]: time="2025-12-12T18:39:03.326800680Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:03.331675 containerd[1527]: time="2025-12-12T18:39:03.331112774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:03.332350 containerd[1527]: time="2025-12-12T18:39:03.332288430Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id 
\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.546505173s" Dec 12 18:39:03.332350 containerd[1527]: time="2025-12-12T18:39:03.332347692Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 12 18:39:03.333064 containerd[1527]: time="2025-12-12T18:39:03.332990207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 18:39:04.533713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666636757.mount: Deactivated successfully. Dec 12 18:39:05.116329 containerd[1527]: time="2025-12-12T18:39:05.116264355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:05.118993 containerd[1527]: time="2025-12-12T18:39:05.118604440Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423" Dec 12 18:39:05.119883 containerd[1527]: time="2025-12-12T18:39:05.119835465Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:05.123822 containerd[1527]: time="2025-12-12T18:39:05.122649452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:05.123822 containerd[1527]: time="2025-12-12T18:39:05.123578821Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo 
tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.790379364s" Dec 12 18:39:05.123822 containerd[1527]: time="2025-12-12T18:39:05.123626857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 12 18:39:05.124804 containerd[1527]: time="2025-12-12T18:39:05.124771120Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 18:39:05.126857 systemd-resolved[1390]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Dec 12 18:39:05.634341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75921201.mount: Deactivated successfully. Dec 12 18:39:06.568338 containerd[1527]: time="2025-12-12T18:39:06.568277845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:06.569425 containerd[1527]: time="2025-12-12T18:39:06.569388955Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Dec 12 18:39:06.570055 containerd[1527]: time="2025-12-12T18:39:06.569996588Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:06.573328 containerd[1527]: time="2025-12-12T18:39:06.573269317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:06.575070 containerd[1527]: time="2025-12-12T18:39:06.574380282Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.449438997s" Dec 12 18:39:06.575070 containerd[1527]: time="2025-12-12T18:39:06.574415064Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 12 18:39:06.575512 containerd[1527]: time="2025-12-12T18:39:06.575492453Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:39:07.120076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307584812.mount: Deactivated successfully. Dec 12 18:39:07.124061 containerd[1527]: time="2025-12-12T18:39:07.123930416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:39:07.125112 containerd[1527]: time="2025-12-12T18:39:07.125076139Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 12 18:39:07.125468 containerd[1527]: time="2025-12-12T18:39:07.125411397Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:39:07.127101 containerd[1527]: time="2025-12-12T18:39:07.127026643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:39:07.128181 containerd[1527]: time="2025-12-12T18:39:07.127750341Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 552.145992ms" Dec 12 18:39:07.128181 containerd[1527]: time="2025-12-12T18:39:07.127785387Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:39:07.128360 containerd[1527]: time="2025-12-12T18:39:07.128198555Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 18:39:08.011972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886588816.mount: Deactivated successfully. Dec 12 18:39:08.222226 systemd-resolved[1390]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 12 18:39:09.886265 containerd[1527]: time="2025-12-12T18:39:09.886201110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:09.887349 containerd[1527]: time="2025-12-12T18:39:09.887227966Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Dec 12 18:39:09.888051 containerd[1527]: time="2025-12-12T18:39:09.887868943Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:09.891024 containerd[1527]: time="2025-12-12T18:39:09.890454989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:09.891904 containerd[1527]: time="2025-12-12T18:39:09.891862108Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.763638171s" Dec 12 18:39:09.892003 containerd[1527]: time="2025-12-12T18:39:09.891906987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 12 18:39:12.642392 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:39:12.642555 systemd[1]: kubelet.service: Consumed 278ms CPU time, 108.8M memory peak. Dec 12 18:39:12.646334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:39:12.677448 systemd[1]: Reload requested from client PID 2238 ('systemctl') (unit session-7.scope)... Dec 12 18:39:12.677477 systemd[1]: Reloading... Dec 12 18:39:12.820331 zram_generator::config[2281]: No configuration found. Dec 12 18:39:13.086961 systemd[1]: Reloading finished in 408 ms. Dec 12 18:39:13.144993 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:39:13.145118 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:39:13.145640 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:39:13.145718 systemd[1]: kubelet.service: Consumed 118ms CPU time, 98.2M memory peak. Dec 12 18:39:13.148335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:39:13.307327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 18:39:13.322578 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:39:13.383446 kubelet[2336]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:39:13.383446 kubelet[2336]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:39:13.383446 kubelet[2336]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:39:13.384171 kubelet[2336]: I1212 18:39:13.383518 2336 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:39:14.070123 kubelet[2336]: I1212 18:39:14.070020 2336 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:39:14.070446 kubelet[2336]: I1212 18:39:14.070380 2336 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:39:14.071070 kubelet[2336]: I1212 18:39:14.071010 2336 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:39:14.112282 kubelet[2336]: I1212 18:39:14.111839 2336 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:39:14.113734 kubelet[2336]: E1212 18:39:14.113686 2336 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://143.198.226.225:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.226.225:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:39:14.128043 kubelet[2336]: I1212 18:39:14.127938 2336 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:39:14.132701 kubelet[2336]: I1212 18:39:14.132668 2336 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:39:14.133499 kubelet[2336]: I1212 18:39:14.133151 2336 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:39:14.133499 kubelet[2336]: I1212 18:39:14.133209 2336 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-f-e155308a0b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy
":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:39:14.134106 kubelet[2336]: I1212 18:39:14.134086 2336 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:39:14.134204 kubelet[2336]: I1212 18:39:14.134195 2336 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:39:14.137442 kubelet[2336]: I1212 18:39:14.137405 2336 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:39:14.140784 kubelet[2336]: I1212 18:39:14.140721 2336 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:39:14.140924 kubelet[2336]: I1212 18:39:14.140804 2336 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:39:14.140924 kubelet[2336]: I1212 18:39:14.140842 2336 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:39:14.140924 kubelet[2336]: I1212 18:39:14.140859 2336 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:39:14.147726 kubelet[2336]: W1212 18:39:14.147635 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.226.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-f-e155308a0b&limit=500&resourceVersion=0": dial tcp 143.198.226.225:6443: connect: connection refused Dec 12 18:39:14.148008 kubelet[2336]: E1212 18:39:14.147954 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.226.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-f-e155308a0b&limit=500&resourceVersion=0\": dial tcp 143.198.226.225:6443: connect: connection refused" 
logger="UnhandledError" Dec 12 18:39:14.150155 kubelet[2336]: I1212 18:39:14.150008 2336 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:39:14.154053 kubelet[2336]: I1212 18:39:14.153987 2336 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:39:14.154217 kubelet[2336]: W1212 18:39:14.154130 2336 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:39:14.154802 kubelet[2336]: I1212 18:39:14.154781 2336 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:39:14.154976 kubelet[2336]: I1212 18:39:14.154820 2336 server.go:1287] "Started kubelet" Dec 12 18:39:14.165681 kubelet[2336]: W1212 18:39:14.165611 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.226.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.226.225:6443: connect: connection refused Dec 12 18:39:14.166046 kubelet[2336]: E1212 18:39:14.166016 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.226.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.226.225:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:39:14.166835 kubelet[2336]: I1212 18:39:14.166732 2336 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:39:14.168301 kubelet[2336]: I1212 18:39:14.168270 2336 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:39:14.168470 kubelet[2336]: E1212 18:39:14.166698 2336 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.226.225:6443/api/v1/namespaces/default/events\": dial tcp 
143.198.226.225:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-f-e155308a0b.18808bca1f58b6d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-f-e155308a0b,UID:ci-4459.2.2-f-e155308a0b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-f-e155308a0b,},FirstTimestamp:2025-12-12 18:39:14.154796752 +0000 UTC m=+0.827333865,LastTimestamp:2025-12-12 18:39:14.154796752 +0000 UTC m=+0.827333865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-f-e155308a0b,}" Dec 12 18:39:14.169412 kubelet[2336]: I1212 18:39:14.169069 2336 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:39:14.171331 kubelet[2336]: I1212 18:39:14.170500 2336 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:39:14.171331 kubelet[2336]: I1212 18:39:14.170864 2336 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:39:14.173477 kubelet[2336]: I1212 18:39:14.172775 2336 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:39:14.176075 kubelet[2336]: E1212 18:39:14.176041 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-f-e155308a0b\" not found" Dec 12 18:39:14.176075 kubelet[2336]: I1212 18:39:14.176084 2336 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:39:14.176552 kubelet[2336]: I1212 18:39:14.176357 2336 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:39:14.176552 kubelet[2336]: I1212 18:39:14.176439 2336 reconciler.go:26] "Reconciler: start to 
sync state" Dec 12 18:39:14.177172 kubelet[2336]: W1212 18:39:14.176979 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.226.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.226.225:6443: connect: connection refused Dec 12 18:39:14.177172 kubelet[2336]: E1212 18:39:14.177086 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.226.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.226.225:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:39:14.177763 kubelet[2336]: E1212 18:39:14.177486 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.226.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-f-e155308a0b?timeout=10s\": dial tcp 143.198.226.225:6443: connect: connection refused" interval="200ms" Dec 12 18:39:14.182274 kubelet[2336]: I1212 18:39:14.181362 2336 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:39:14.182274 kubelet[2336]: I1212 18:39:14.181519 2336 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:39:14.192763 kubelet[2336]: I1212 18:39:14.192739 2336 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:39:14.194183 kubelet[2336]: E1212 18:39:14.194147 2336 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:39:14.207557 kubelet[2336]: I1212 18:39:14.207528 2336 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:39:14.207712 kubelet[2336]: I1212 18:39:14.207569 2336 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:39:14.207712 kubelet[2336]: I1212 18:39:14.207602 2336 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:39:14.209839 kubelet[2336]: I1212 18:39:14.209727 2336 policy_none.go:49] "None policy: Start" Dec 12 18:39:14.209839 kubelet[2336]: I1212 18:39:14.209783 2336 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:39:14.209839 kubelet[2336]: I1212 18:39:14.209805 2336 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:39:14.217400 kubelet[2336]: I1212 18:39:14.217173 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:39:14.222147 kubelet[2336]: I1212 18:39:14.222109 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:39:14.222301 kubelet[2336]: I1212 18:39:14.222291 2336 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:39:14.222379 kubelet[2336]: I1212 18:39:14.222365 2336 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:39:14.222882 kubelet[2336]: I1212 18:39:14.222458 2336 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:39:14.222882 kubelet[2336]: E1212 18:39:14.222549 2336 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:39:14.231590 kubelet[2336]: W1212 18:39:14.231189 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.226.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.226.225:6443: connect: connection refused Dec 12 18:39:14.231749 kubelet[2336]: E1212 18:39:14.231633 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.226.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.226.225:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:39:14.233099 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:39:14.246880 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:39:14.252406 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 12 18:39:14.264677 kubelet[2336]: I1212 18:39:14.264634 2336 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:39:14.265044 kubelet[2336]: I1212 18:39:14.264948 2336 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:39:14.265044 kubelet[2336]: I1212 18:39:14.264973 2336 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:39:14.265853 kubelet[2336]: I1212 18:39:14.265707 2336 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:39:14.268196 kubelet[2336]: E1212 18:39:14.268164 2336 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:39:14.268353 kubelet[2336]: E1212 18:39:14.268287 2336 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-f-e155308a0b\" not found" Dec 12 18:39:14.341858 systemd[1]: Created slice kubepods-burstable-pod23a8465b6894249161f19a0182e3d065.slice - libcontainer container kubepods-burstable-pod23a8465b6894249161f19a0182e3d065.slice. Dec 12 18:39:14.361456 kubelet[2336]: E1212 18:39:14.361167 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-f-e155308a0b\" not found" node="ci-4459.2.2-f-e155308a0b" Dec 12 18:39:14.362845 systemd[1]: Created slice kubepods-burstable-podab86fd7b64a08793b88260b55e53343f.slice - libcontainer container kubepods-burstable-podab86fd7b64a08793b88260b55e53343f.slice. 
Dec 12 18:39:14.367093 kubelet[2336]: E1212 18:39:14.366957 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-f-e155308a0b\" not found" node="ci-4459.2.2-f-e155308a0b" Dec 12 18:39:14.367093 kubelet[2336]: I1212 18:39:14.367084 2336 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-f-e155308a0b" Dec 12 18:39:14.369897 kubelet[2336]: E1212 18:39:14.369858 2336 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.226.225:6443/api/v1/nodes\": dial tcp 143.198.226.225:6443: connect: connection refused" node="ci-4459.2.2-f-e155308a0b" Dec 12 18:39:14.371611 systemd[1]: Created slice kubepods-burstable-pod2cbdf3d1e3857880f2b78d67dfa31d0f.slice - libcontainer container kubepods-burstable-pod2cbdf3d1e3857880f2b78d67dfa31d0f.slice. Dec 12 18:39:14.374221 kubelet[2336]: E1212 18:39:14.374048 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-f-e155308a0b\" not found" node="ci-4459.2.2-f-e155308a0b" Dec 12 18:39:14.378954 kubelet[2336]: E1212 18:39:14.378889 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.226.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-f-e155308a0b?timeout=10s\": dial tcp 143.198.226.225:6443: connect: connection refused" interval="400ms" Dec 12 18:39:14.478327 kubelet[2336]: I1212 18:39:14.478247 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2cbdf3d1e3857880f2b78d67dfa31d0f-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-f-e155308a0b\" (UID: \"2cbdf3d1e3857880f2b78d67dfa31d0f\") " pod="kube-system/kube-scheduler-ci-4459.2.2-f-e155308a0b" Dec 12 18:39:14.478327 kubelet[2336]: I1212 18:39:14.478304 2336 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a8465b6894249161f19a0182e3d065-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-f-e155308a0b\" (UID: \"23a8465b6894249161f19a0182e3d065\") " pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b" Dec 12 18:39:14.478327 kubelet[2336]: I1212 18:39:14.478337 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab86fd7b64a08793b88260b55e53343f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" (UID: \"ab86fd7b64a08793b88260b55e53343f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b" Dec 12 18:39:14.479075 kubelet[2336]: I1212 18:39:14.478364 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab86fd7b64a08793b88260b55e53343f-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" (UID: \"ab86fd7b64a08793b88260b55e53343f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b" Dec 12 18:39:14.479075 kubelet[2336]: I1212 18:39:14.478390 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab86fd7b64a08793b88260b55e53343f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" (UID: \"ab86fd7b64a08793b88260b55e53343f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b" Dec 12 18:39:14.479075 kubelet[2336]: I1212 18:39:14.478424 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a8465b6894249161f19a0182e3d065-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-f-e155308a0b\" (UID: 
\"23a8465b6894249161f19a0182e3d065\") " pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:14.479075 kubelet[2336]: I1212 18:39:14.478439 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a8465b6894249161f19a0182e3d065-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-f-e155308a0b\" (UID: \"23a8465b6894249161f19a0182e3d065\") " pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:14.479075 kubelet[2336]: I1212 18:39:14.478459 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab86fd7b64a08793b88260b55e53343f-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" (UID: \"ab86fd7b64a08793b88260b55e53343f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:14.479325 kubelet[2336]: I1212 18:39:14.478475 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab86fd7b64a08793b88260b55e53343f-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" (UID: \"ab86fd7b64a08793b88260b55e53343f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:14.571971 kubelet[2336]: I1212 18:39:14.571765 2336 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:14.572397 kubelet[2336]: E1212 18:39:14.572352 2336 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.226.225:6443/api/v1/nodes\": dial tcp 143.198.226.225:6443: connect: connection refused" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:14.661920 kubelet[2336]: E1212 18:39:14.661774 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:14.664436 containerd[1527]: time="2025-12-12T18:39:14.664360558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-f-e155308a0b,Uid:23a8465b6894249161f19a0182e3d065,Namespace:kube-system,Attempt:0,}"
Dec 12 18:39:14.669546 kubelet[2336]: E1212 18:39:14.669490 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:14.670608 containerd[1527]: time="2025-12-12T18:39:14.670320707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-f-e155308a0b,Uid:ab86fd7b64a08793b88260b55e53343f,Namespace:kube-system,Attempt:0,}"
Dec 12 18:39:14.675232 kubelet[2336]: E1212 18:39:14.675173 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:14.676090 containerd[1527]: time="2025-12-12T18:39:14.675811138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-f-e155308a0b,Uid:2cbdf3d1e3857880f2b78d67dfa31d0f,Namespace:kube-system,Attempt:0,}"
Dec 12 18:39:14.780179 kubelet[2336]: E1212 18:39:14.779621 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.226.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-f-e155308a0b?timeout=10s\": dial tcp 143.198.226.225:6443: connect: connection refused" interval="800ms"
Dec 12 18:39:14.811578 containerd[1527]: time="2025-12-12T18:39:14.811475130Z" level=info msg="connecting to shim ad0420cc911c746c0fec88c40876057a37f1cf1f0332d9bcb75a5106a4862c11" address="unix:///run/containerd/s/b835455bde82b9f8c6928912371445a3219b4f8bca94a57c707313bcf3a80dd3" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:39:14.813405 containerd[1527]: time="2025-12-12T18:39:14.813288155Z" level=info msg="connecting to shim dcfb4cc76a06ec7c6ef096bd6c0d197a6d6f9d5933efa65a29de33ac54f79ef5" address="unix:///run/containerd/s/80f245446e68cd9ac645682eb0a712cb1e02ce419b6ca68fb5443ef1b5fd347f" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:39:14.819425 containerd[1527]: time="2025-12-12T18:39:14.819307994Z" level=info msg="connecting to shim 2e03b60b6daeb8c16c61205058f06840787a3ca46b3b3ddf217bbb2691dd1ac6" address="unix:///run/containerd/s/fbeba766a270ee1eb334d0ad92bc4044627f0a890e55c7e4dafb79cd88de85d8" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:39:14.938687 systemd[1]: Started cri-containerd-2e03b60b6daeb8c16c61205058f06840787a3ca46b3b3ddf217bbb2691dd1ac6.scope - libcontainer container 2e03b60b6daeb8c16c61205058f06840787a3ca46b3b3ddf217bbb2691dd1ac6.
Dec 12 18:39:14.955561 systemd[1]: Started cri-containerd-ad0420cc911c746c0fec88c40876057a37f1cf1f0332d9bcb75a5106a4862c11.scope - libcontainer container ad0420cc911c746c0fec88c40876057a37f1cf1f0332d9bcb75a5106a4862c11.
Dec 12 18:39:14.958714 systemd[1]: Started cri-containerd-dcfb4cc76a06ec7c6ef096bd6c0d197a6d6f9d5933efa65a29de33ac54f79ef5.scope - libcontainer container dcfb4cc76a06ec7c6ef096bd6c0d197a6d6f9d5933efa65a29de33ac54f79ef5.
Dec 12 18:39:14.977254 kubelet[2336]: I1212 18:39:14.976969 2336 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:14.978446 kubelet[2336]: E1212 18:39:14.977746 2336 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.226.225:6443/api/v1/nodes\": dial tcp 143.198.226.225:6443: connect: connection refused" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:15.081937 containerd[1527]: time="2025-12-12T18:39:15.081768959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-f-e155308a0b,Uid:23a8465b6894249161f19a0182e3d065,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcfb4cc76a06ec7c6ef096bd6c0d197a6d6f9d5933efa65a29de33ac54f79ef5\""
Dec 12 18:39:15.085699 kubelet[2336]: E1212 18:39:15.084343 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:15.088055 containerd[1527]: time="2025-12-12T18:39:15.088008265Z" level=info msg="CreateContainer within sandbox \"dcfb4cc76a06ec7c6ef096bd6c0d197a6d6f9d5933efa65a29de33ac54f79ef5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 12 18:39:15.106692 containerd[1527]: time="2025-12-12T18:39:15.106654830Z" level=info msg="Container c910b367e3a6ac245f84d7d5a174ae4ad261adb3ede1b9323262acc0b036b965: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:39:15.110568 containerd[1527]: time="2025-12-12T18:39:15.110445486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-f-e155308a0b,Uid:ab86fd7b64a08793b88260b55e53343f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad0420cc911c746c0fec88c40876057a37f1cf1f0332d9bcb75a5106a4862c11\""
Dec 12 18:39:15.112061 kubelet[2336]: E1212 18:39:15.111964 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:15.113871 containerd[1527]: time="2025-12-12T18:39:15.113768248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-f-e155308a0b,Uid:2cbdf3d1e3857880f2b78d67dfa31d0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e03b60b6daeb8c16c61205058f06840787a3ca46b3b3ddf217bbb2691dd1ac6\""
Dec 12 18:39:15.115376 kubelet[2336]: E1212 18:39:15.115341 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:15.117623 containerd[1527]: time="2025-12-12T18:39:15.117459918Z" level=info msg="CreateContainer within sandbox \"dcfb4cc76a06ec7c6ef096bd6c0d197a6d6f9d5933efa65a29de33ac54f79ef5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c910b367e3a6ac245f84d7d5a174ae4ad261adb3ede1b9323262acc0b036b965\""
Dec 12 18:39:15.118190 containerd[1527]: time="2025-12-12T18:39:15.117903464Z" level=info msg="CreateContainer within sandbox \"ad0420cc911c746c0fec88c40876057a37f1cf1f0332d9bcb75a5106a4862c11\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 12 18:39:15.118545 containerd[1527]: time="2025-12-12T18:39:15.118516754Z" level=info msg="StartContainer for \"c910b367e3a6ac245f84d7d5a174ae4ad261adb3ede1b9323262acc0b036b965\""
Dec 12 18:39:15.119327 containerd[1527]: time="2025-12-12T18:39:15.118862490Z" level=info msg="CreateContainer within sandbox \"2e03b60b6daeb8c16c61205058f06840787a3ca46b3b3ddf217bbb2691dd1ac6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 12 18:39:15.121722 containerd[1527]: time="2025-12-12T18:39:15.121633679Z" level=info msg="connecting to shim c910b367e3a6ac245f84d7d5a174ae4ad261adb3ede1b9323262acc0b036b965" address="unix:///run/containerd/s/80f245446e68cd9ac645682eb0a712cb1e02ce419b6ca68fb5443ef1b5fd347f" protocol=ttrpc version=3
Dec 12 18:39:15.133364 containerd[1527]: time="2025-12-12T18:39:15.133263526Z" level=info msg="Container f1b490c5e019128709f8bc4e119c25c795388e4e372e2a0cb7eeb70f048b36e0: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:39:15.134058 containerd[1527]: time="2025-12-12T18:39:15.133997507Z" level=info msg="Container 3025e8d7724489b7e9548a07abd0ee4c5fae457d03e5072d886ccd9169cc4239: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:39:15.146974 containerd[1527]: time="2025-12-12T18:39:15.146913492Z" level=info msg="CreateContainer within sandbox \"2e03b60b6daeb8c16c61205058f06840787a3ca46b3b3ddf217bbb2691dd1ac6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f1b490c5e019128709f8bc4e119c25c795388e4e372e2a0cb7eeb70f048b36e0\""
Dec 12 18:39:15.147970 containerd[1527]: time="2025-12-12T18:39:15.147922495Z" level=info msg="StartContainer for \"f1b490c5e019128709f8bc4e119c25c795388e4e372e2a0cb7eeb70f048b36e0\""
Dec 12 18:39:15.151278 containerd[1527]: time="2025-12-12T18:39:15.150833659Z" level=info msg="connecting to shim f1b490c5e019128709f8bc4e119c25c795388e4e372e2a0cb7eeb70f048b36e0" address="unix:///run/containerd/s/fbeba766a270ee1eb334d0ad92bc4044627f0a890e55c7e4dafb79cd88de85d8" protocol=ttrpc version=3
Dec 12 18:39:15.153010 containerd[1527]: time="2025-12-12T18:39:15.152967510Z" level=info msg="CreateContainer within sandbox \"ad0420cc911c746c0fec88c40876057a37f1cf1f0332d9bcb75a5106a4862c11\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3025e8d7724489b7e9548a07abd0ee4c5fae457d03e5072d886ccd9169cc4239\""
Dec 12 18:39:15.154059 containerd[1527]: time="2025-12-12T18:39:15.153974461Z" level=info msg="StartContainer for \"3025e8d7724489b7e9548a07abd0ee4c5fae457d03e5072d886ccd9169cc4239\""
Dec 12 18:39:15.155618 systemd[1]: Started cri-containerd-c910b367e3a6ac245f84d7d5a174ae4ad261adb3ede1b9323262acc0b036b965.scope - libcontainer container c910b367e3a6ac245f84d7d5a174ae4ad261adb3ede1b9323262acc0b036b965.
Dec 12 18:39:15.159185 containerd[1527]: time="2025-12-12T18:39:15.159095155Z" level=info msg="connecting to shim 3025e8d7724489b7e9548a07abd0ee4c5fae457d03e5072d886ccd9169cc4239" address="unix:///run/containerd/s/b835455bde82b9f8c6928912371445a3219b4f8bca94a57c707313bcf3a80dd3" protocol=ttrpc version=3
Dec 12 18:39:15.190258 kubelet[2336]: W1212 18:39:15.190079 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.226.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-f-e155308a0b&limit=500&resourceVersion=0": dial tcp 143.198.226.225:6443: connect: connection refused
Dec 12 18:39:15.190258 kubelet[2336]: E1212 18:39:15.190188 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.226.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-f-e155308a0b&limit=500&resourceVersion=0\": dial tcp 143.198.226.225:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:39:15.193383 systemd[1]: Started cri-containerd-f1b490c5e019128709f8bc4e119c25c795388e4e372e2a0cb7eeb70f048b36e0.scope - libcontainer container f1b490c5e019128709f8bc4e119c25c795388e4e372e2a0cb7eeb70f048b36e0.
Dec 12 18:39:15.218152 systemd[1]: Started cri-containerd-3025e8d7724489b7e9548a07abd0ee4c5fae457d03e5072d886ccd9169cc4239.scope - libcontainer container 3025e8d7724489b7e9548a07abd0ee4c5fae457d03e5072d886ccd9169cc4239.
Dec 12 18:39:15.302935 containerd[1527]: time="2025-12-12T18:39:15.302877200Z" level=info msg="StartContainer for \"c910b367e3a6ac245f84d7d5a174ae4ad261adb3ede1b9323262acc0b036b965\" returns successfully"
Dec 12 18:39:15.357802 containerd[1527]: time="2025-12-12T18:39:15.357755127Z" level=info msg="StartContainer for \"3025e8d7724489b7e9548a07abd0ee4c5fae457d03e5072d886ccd9169cc4239\" returns successfully"
Dec 12 18:39:15.385399 containerd[1527]: time="2025-12-12T18:39:15.385302131Z" level=info msg="StartContainer for \"f1b490c5e019128709f8bc4e119c25c795388e4e372e2a0cb7eeb70f048b36e0\" returns successfully"
Dec 12 18:39:15.411154 kubelet[2336]: W1212 18:39:15.411069 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.226.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.226.225:6443: connect: connection refused
Dec 12 18:39:15.411324 kubelet[2336]: E1212 18:39:15.411177 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.226.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.226.225:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:39:15.447390 kubelet[2336]: W1212 18:39:15.447207 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.226.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.226.225:6443: connect: connection refused
Dec 12 18:39:15.447390 kubelet[2336]: E1212 18:39:15.447281 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.226.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.226.225:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:39:15.452238 kubelet[2336]: W1212 18:39:15.452163 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.226.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.226.225:6443: connect: connection refused
Dec 12 18:39:15.452389 kubelet[2336]: E1212 18:39:15.452252 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.226.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.226.225:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:39:15.780659 kubelet[2336]: I1212 18:39:15.780618 2336 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:16.287456 kubelet[2336]: E1212 18:39:16.287278 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-f-e155308a0b\" not found" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:16.288262 kubelet[2336]: E1212 18:39:16.287613 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:16.288262 kubelet[2336]: E1212 18:39:16.287944 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-f-e155308a0b\" not found" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:16.288262 kubelet[2336]: E1212 18:39:16.288067 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:16.290574 kubelet[2336]: E1212 18:39:16.290551 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-f-e155308a0b\" not found" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:16.290960 kubelet[2336]: E1212 18:39:16.290908 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:17.294770 kubelet[2336]: E1212 18:39:17.294734 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-f-e155308a0b\" not found" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:17.295693 kubelet[2336]: E1212 18:39:17.294885 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:17.295693 kubelet[2336]: E1212 18:39:17.295154 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-f-e155308a0b\" not found" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:17.295693 kubelet[2336]: E1212 18:39:17.295255 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:17.297714 kubelet[2336]: E1212 18:39:17.296241 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-f-e155308a0b\" not found" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:17.297714 kubelet[2336]: E1212 18:39:17.296349 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:18.203311 kubelet[2336]: E1212 18:39:18.203252 2336 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-f-e155308a0b\" not found" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.259747 kubelet[2336]: I1212 18:39:18.259690 2336 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.277863 kubelet[2336]: I1212 18:39:18.277775 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.294619 kubelet[2336]: I1212 18:39:18.294578 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.297267 kubelet[2336]: I1212 18:39:18.297223 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.316257 kubelet[2336]: E1212 18:39:18.316209 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-f-e155308a0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.316451 kubelet[2336]: E1212 18:39:18.316421 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:18.316513 kubelet[2336]: E1212 18:39:18.315021 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-f-e155308a0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.316622 kubelet[2336]: E1212 18:39:18.316595 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:18.332727 kubelet[2336]: E1212 18:39:18.332397 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-f-e155308a0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.332727 kubelet[2336]: I1212 18:39:18.332438 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.338073 kubelet[2336]: E1212 18:39:18.338012 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.338073 kubelet[2336]: I1212 18:39:18.338075 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:18.349643 kubelet[2336]: E1212 18:39:18.349584 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-f-e155308a0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:19.154359 kubelet[2336]: I1212 18:39:19.154289 2336 apiserver.go:52] "Watching apiserver"
Dec 12 18:39:19.176844 kubelet[2336]: I1212 18:39:19.176781 2336 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 12 18:39:20.141060 kubelet[2336]: I1212 18:39:20.140905 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:20.150922 kubelet[2336]: W1212 18:39:20.150639 2336 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:39:20.151184 kubelet[2336]: E1212 18:39:20.151148 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:20.300134 kubelet[2336]: E1212 18:39:20.300092 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:20.789614 systemd[1]: Reload requested from client PID 2608 ('systemctl') (unit session-7.scope)...
Dec 12 18:39:20.789632 systemd[1]: Reloading...
Dec 12 18:39:20.914061 zram_generator::config[2655]: No configuration found.
Dec 12 18:39:21.175299 systemd[1]: Reloading finished in 385 ms.
Dec 12 18:39:21.201294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:39:21.220973 systemd[1]: kubelet.service: Deactivated successfully.
Dec 12 18:39:21.221250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:39:21.221316 systemd[1]: kubelet.service: Consumed 1.369s CPU time, 124.6M memory peak.
Dec 12 18:39:21.223583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:39:21.390580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:39:21.401599 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:39:21.497004 kubelet[2702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:39:21.497004 kubelet[2702]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 18:39:21.497004 kubelet[2702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:39:21.498057 kubelet[2702]: I1212 18:39:21.497615 2702 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 18:39:21.506930 kubelet[2702]: I1212 18:39:21.506671 2702 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 12 18:39:21.506930 kubelet[2702]: I1212 18:39:21.506913 2702 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 18:39:21.507411 kubelet[2702]: I1212 18:39:21.507383 2702 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 12 18:39:21.510630 kubelet[2702]: I1212 18:39:21.510591 2702 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 12 18:39:21.523710 kubelet[2702]: I1212 18:39:21.522585 2702 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 18:39:21.532926 kubelet[2702]: I1212 18:39:21.532880 2702 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 18:39:21.536822 kubelet[2702]: I1212 18:39:21.536787 2702 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 18:39:21.537488 kubelet[2702]: I1212 18:39:21.537246 2702 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 18:39:21.537672 kubelet[2702]: I1212 18:39:21.537277 2702 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-f-e155308a0b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 18:39:21.537940 kubelet[2702]: I1212 18:39:21.537910 2702 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 18:39:21.538113 kubelet[2702]: I1212 18:39:21.538101 2702 container_manager_linux.go:304] "Creating device plugin manager"
Dec 12 18:39:21.538238 kubelet[2702]: I1212 18:39:21.538220 2702 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:39:21.538454 kubelet[2702]: I1212 18:39:21.538426 2702 kubelet.go:446] "Attempting to sync node with API server"
Dec 12 18:39:21.538454 kubelet[2702]: I1212 18:39:21.538453 2702 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 18:39:21.539045 kubelet[2702]: I1212 18:39:21.538977 2702 kubelet.go:352] "Adding apiserver pod source"
Dec 12 18:39:21.539045 kubelet[2702]: I1212 18:39:21.539003 2702 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 18:39:21.544367 kubelet[2702]: I1212 18:39:21.544316 2702 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 18:39:21.548052 kubelet[2702]: I1212 18:39:21.547261 2702 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 12 18:39:21.548052 kubelet[2702]: I1212 18:39:21.547781 2702 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 18:39:21.548052 kubelet[2702]: I1212 18:39:21.547823 2702 server.go:1287] "Started kubelet"
Dec 12 18:39:21.548052 kubelet[2702]: I1212 18:39:21.547912 2702 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 18:39:21.548549 kubelet[2702]: I1212 18:39:21.548485 2702 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 18:39:21.548881 kubelet[2702]: I1212 18:39:21.548867 2702 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 18:39:21.549966 kubelet[2702]: I1212 18:39:21.549942 2702 server.go:479] "Adding debug handlers to kubelet server"
Dec 12 18:39:21.557226 kubelet[2702]: I1212 18:39:21.557193 2702 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 18:39:21.571433 kubelet[2702]: I1212 18:39:21.571273 2702 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 18:39:21.572940 kubelet[2702]: E1212 18:39:21.572917 2702 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 18:39:21.573827 kubelet[2702]: I1212 18:39:21.573803 2702 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 18:39:21.574046 kubelet[2702]: I1212 18:39:21.574019 2702 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 18:39:21.574225 kubelet[2702]: I1212 18:39:21.574212 2702 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 18:39:21.574974 kubelet[2702]: I1212 18:39:21.574953 2702 factory.go:221] Registration of the systemd container factory successfully
Dec 12 18:39:21.575196 kubelet[2702]: I1212 18:39:21.575171 2702 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 18:39:21.578740 kubelet[2702]: I1212 18:39:21.578703 2702 factory.go:221] Registration of the containerd container factory successfully
Dec 12 18:39:21.585059 kubelet[2702]: I1212 18:39:21.584525 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 12 18:39:21.588202 kubelet[2702]: I1212 18:39:21.588082 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 12 18:39:21.589253 kubelet[2702]: I1212 18:39:21.589230 2702 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 12 18:39:21.589427 kubelet[2702]: I1212 18:39:21.589416 2702 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 18:39:21.589916 kubelet[2702]: I1212 18:39:21.589471 2702 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 12 18:39:21.589916 kubelet[2702]: E1212 18:39:21.589537 2702 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 18:39:21.643174 kubelet[2702]: I1212 18:39:21.643142 2702 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 18:39:21.643359 kubelet[2702]: I1212 18:39:21.643346 2702 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 18:39:21.643434 kubelet[2702]: I1212 18:39:21.643426 2702 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:39:21.643662 kubelet[2702]: I1212 18:39:21.643647 2702 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 12 18:39:21.643732 kubelet[2702]: I1212 18:39:21.643712 2702 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 12 18:39:21.643778 kubelet[2702]: I1212 18:39:21.643772 2702 policy_none.go:49] "None policy: Start"
Dec 12 18:39:21.643828 kubelet[2702]: I1212 18:39:21.643821 2702 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 18:39:21.643876 kubelet[2702]: I1212 18:39:21.643869 2702 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 18:39:21.644025 kubelet[2702]: I1212 18:39:21.644016 2702 state_mem.go:75] "Updated machine memory state"
Dec 12 18:39:21.649056 kubelet[2702]: I1212 18:39:21.649002 2702 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 12 18:39:21.649781 kubelet[2702]: I1212 18:39:21.649735 2702 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 18:39:21.649970 kubelet[2702]: I1212 18:39:21.649751 2702 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 18:39:21.650548 kubelet[2702]: I1212 18:39:21.650510 2702 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 18:39:21.654585 kubelet[2702]: E1212 18:39:21.654255 2702 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 18:39:21.690352 kubelet[2702]: I1212 18:39:21.690268 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.692269 kubelet[2702]: I1212 18:39:21.692232 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.693024 kubelet[2702]: I1212 18:39:21.692872 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.700437 kubelet[2702]: W1212 18:39:21.700256 2702 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:39:21.700728 kubelet[2702]: W1212 18:39:21.700311 2702 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:39:21.700893 kubelet[2702]: E1212 18:39:21.700857 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.701281 kubelet[2702]: W1212 18:39:21.700465 2702 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 12 18:39:21.763089 kubelet[2702]: I1212 18:39:21.762853 2702 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.781060 kubelet[2702]: I1212 18:39:21.779941 2702 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.781060 kubelet[2702]: I1212 18:39:21.780120 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a8465b6894249161f19a0182e3d065-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-f-e155308a0b\" (UID: \"23a8465b6894249161f19a0182e3d065\") " pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.781060 kubelet[2702]: I1212 18:39:21.780187 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab86fd7b64a08793b88260b55e53343f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" (UID: \"ab86fd7b64a08793b88260b55e53343f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.781060 kubelet[2702]: I1212 18:39:21.780218 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab86fd7b64a08793b88260b55e53343f-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" (UID: \"ab86fd7b64a08793b88260b55e53343f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.781060 kubelet[2702]: I1212 18:39:21.780260 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab86fd7b64a08793b88260b55e53343f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" (UID: \"ab86fd7b64a08793b88260b55e53343f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.781397 kubelet[2702]: I1212 18:39:21.780280 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2cbdf3d1e3857880f2b78d67dfa31d0f-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-f-e155308a0b\" (UID: \"2cbdf3d1e3857880f2b78d67dfa31d0f\") " pod="kube-system/kube-scheduler-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.781397 kubelet[2702]: I1212 18:39:21.780297 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a8465b6894249161f19a0182e3d065-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-f-e155308a0b\" (UID: \"23a8465b6894249161f19a0182e3d065\") " pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.781397 kubelet[2702]: I1212 18:39:21.780326 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab86fd7b64a08793b88260b55e53343f-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" (UID: \"ab86fd7b64a08793b88260b55e53343f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.781397 kubelet[2702]: I1212 18:39:21.780351 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab86fd7b64a08793b88260b55e53343f-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" (UID: \"ab86fd7b64a08793b88260b55e53343f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b"
Dec 12 18:39:21.781397 kubelet[2702]: I1212 18:39:21.780395 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a8465b6894249161f19a0182e3d065-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-f-e155308a0b\" (UID: \"23a8465b6894249161f19a0182e3d065\") " pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b" Dec 12 18:39:21.782052 kubelet[2702]: I1212 18:39:21.781620 2702 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-f-e155308a0b" Dec 12 18:39:22.002543 kubelet[2702]: E1212 18:39:22.002502 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:22.003937 kubelet[2702]: E1212 18:39:22.002875 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:22.004561 kubelet[2702]: E1212 18:39:22.002958 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:22.541467 kubelet[2702]: I1212 18:39:22.541115 2702 apiserver.go:52] "Watching apiserver" Dec 12 18:39:22.574795 kubelet[2702]: I1212 18:39:22.574743 2702 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:39:22.629005 kubelet[2702]: I1212 18:39:22.627635 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-f-e155308a0b" Dec 12 18:39:22.629005 kubelet[2702]: I1212 18:39:22.628510 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b" Dec 12 18:39:22.629549 kubelet[2702]: E1212 18:39:22.629514 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:22.646888 kubelet[2702]: W1212 18:39:22.646803 2702 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:39:22.647121 kubelet[2702]: E1212 18:39:22.646936 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-f-e155308a0b\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b" Dec 12 18:39:22.648472 kubelet[2702]: E1212 18:39:22.648416 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:22.652615 kubelet[2702]: W1212 18:39:22.651592 2702 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:39:22.652615 kubelet[2702]: E1212 18:39:22.651659 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-f-e155308a0b\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-f-e155308a0b" Dec 12 18:39:22.652615 kubelet[2702]: E1212 18:39:22.651953 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:22.771731 kubelet[2702]: I1212 18:39:22.771630 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-f-e155308a0b" podStartSLOduration=2.7716021189999998 podStartE2EDuration="2.771602119s" podCreationTimestamp="2025-12-12 18:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:22.737289507 +0000 UTC m=+1.326952693" 
watchObservedRunningTime="2025-12-12 18:39:22.771602119 +0000 UTC m=+1.361265295" Dec 12 18:39:22.801822 kubelet[2702]: I1212 18:39:22.801616 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-f-e155308a0b" podStartSLOduration=1.801590582 podStartE2EDuration="1.801590582s" podCreationTimestamp="2025-12-12 18:39:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:22.773094611 +0000 UTC m=+1.362757796" watchObservedRunningTime="2025-12-12 18:39:22.801590582 +0000 UTC m=+1.391253760" Dec 12 18:39:22.826087 kubelet[2702]: I1212 18:39:22.825999 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-f-e155308a0b" podStartSLOduration=1.82597718 podStartE2EDuration="1.82597718s" podCreationTimestamp="2025-12-12 18:39:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:22.802427004 +0000 UTC m=+1.392090190" watchObservedRunningTime="2025-12-12 18:39:22.82597718 +0000 UTC m=+1.415640369" Dec 12 18:39:23.630098 kubelet[2702]: E1212 18:39:23.628909 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:23.630098 kubelet[2702]: E1212 18:39:23.628908 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:23.630098 kubelet[2702]: E1212 18:39:23.629199 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:24.631069 
kubelet[2702]: E1212 18:39:24.630798 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:25.397125 kubelet[2702]: E1212 18:39:25.397051 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:25.634054 kubelet[2702]: E1212 18:39:25.633966 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:26.457426 kubelet[2702]: I1212 18:39:26.457380 2702 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:39:26.458119 containerd[1527]: time="2025-12-12T18:39:26.458068991Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:39:26.458676 kubelet[2702]: I1212 18:39:26.458350 2702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:39:26.790630 systemd[1]: Created slice kubepods-besteffort-poda9d040ac_e2ca_4c1d_b1cc_9ef7613abad3.slice - libcontainer container kubepods-besteffort-poda9d040ac_e2ca_4c1d_b1cc_9ef7613abad3.slice. 
Dec 12 18:39:26.814864 kubelet[2702]: I1212 18:39:26.814758 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3-lib-modules\") pod \"kube-proxy-6k4px\" (UID: \"a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3\") " pod="kube-system/kube-proxy-6k4px" Dec 12 18:39:26.814864 kubelet[2702]: I1212 18:39:26.814831 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bjv4\" (UniqueName: \"kubernetes.io/projected/a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3-kube-api-access-2bjv4\") pod \"kube-proxy-6k4px\" (UID: \"a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3\") " pod="kube-system/kube-proxy-6k4px" Dec 12 18:39:26.814864 kubelet[2702]: I1212 18:39:26.814874 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3-kube-proxy\") pod \"kube-proxy-6k4px\" (UID: \"a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3\") " pod="kube-system/kube-proxy-6k4px" Dec 12 18:39:26.815550 kubelet[2702]: I1212 18:39:26.814899 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3-xtables-lock\") pod \"kube-proxy-6k4px\" (UID: \"a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3\") " pod="kube-system/kube-proxy-6k4px" Dec 12 18:39:26.925120 kubelet[2702]: E1212 18:39:26.924486 2702 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 12 18:39:26.925120 kubelet[2702]: E1212 18:39:26.924522 2702 projected.go:194] Error preparing data for projected volume kube-api-access-2bjv4 for pod kube-system/kube-proxy-6k4px: configmap "kube-root-ca.crt" not found Dec 12 18:39:26.925120 kubelet[2702]: E1212 18:39:26.924611 2702 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3-kube-api-access-2bjv4 podName:a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3 nodeName:}" failed. No retries permitted until 2025-12-12 18:39:27.424582854 +0000 UTC m=+6.014246038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2bjv4" (UniqueName: "kubernetes.io/projected/a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3-kube-api-access-2bjv4") pod "kube-proxy-6k4px" (UID: "a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3") : configmap "kube-root-ca.crt" not found Dec 12 18:39:27.491575 systemd[1]: Created slice kubepods-besteffort-pod149f65c1_8a23_4947_95dd_4db0ef8609df.slice - libcontainer container kubepods-besteffort-pod149f65c1_8a23_4947_95dd_4db0ef8609df.slice. Dec 12 18:39:27.521452 kubelet[2702]: I1212 18:39:27.521278 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk5pd\" (UniqueName: \"kubernetes.io/projected/149f65c1-8a23-4947-95dd-4db0ef8609df-kube-api-access-tk5pd\") pod \"tigera-operator-7dcd859c48-thn2k\" (UID: \"149f65c1-8a23-4947-95dd-4db0ef8609df\") " pod="tigera-operator/tigera-operator-7dcd859c48-thn2k" Dec 12 18:39:27.521452 kubelet[2702]: I1212 18:39:27.521323 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/149f65c1-8a23-4947-95dd-4db0ef8609df-var-lib-calico\") pod \"tigera-operator-7dcd859c48-thn2k\" (UID: \"149f65c1-8a23-4947-95dd-4db0ef8609df\") " pod="tigera-operator/tigera-operator-7dcd859c48-thn2k" Dec 12 18:39:27.706065 kubelet[2702]: E1212 18:39:27.705683 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:27.707413 containerd[1527]: time="2025-12-12T18:39:27.707375369Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6k4px,Uid:a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3,Namespace:kube-system,Attempt:0,}" Dec 12 18:39:27.733023 containerd[1527]: time="2025-12-12T18:39:27.732959152Z" level=info msg="connecting to shim cabc40b9f548886794eb8b4bb4211acdd2ee7664b2439b3b3fda322d8032ec8c" address="unix:///run/containerd/s/245b10a37b233a574c716b54265dc34b8139be0465f4c5e308af6179fce85a1c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:27.766939 systemd[1]: Started cri-containerd-cabc40b9f548886794eb8b4bb4211acdd2ee7664b2439b3b3fda322d8032ec8c.scope - libcontainer container cabc40b9f548886794eb8b4bb4211acdd2ee7664b2439b3b3fda322d8032ec8c. Dec 12 18:39:27.798066 containerd[1527]: time="2025-12-12T18:39:27.797959637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-thn2k,Uid:149f65c1-8a23-4947-95dd-4db0ef8609df,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:39:27.806134 kubelet[2702]: E1212 18:39:27.806077 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:27.815510 containerd[1527]: time="2025-12-12T18:39:27.815447661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6k4px,Uid:a9d040ac-e2ca-4c1d-b1cc-9ef7613abad3,Namespace:kube-system,Attempt:0,} returns sandbox id \"cabc40b9f548886794eb8b4bb4211acdd2ee7664b2439b3b3fda322d8032ec8c\"" Dec 12 18:39:27.817761 kubelet[2702]: E1212 18:39:27.817715 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:27.828079 containerd[1527]: time="2025-12-12T18:39:27.826087437Z" level=info msg="CreateContainer within sandbox \"cabc40b9f548886794eb8b4bb4211acdd2ee7664b2439b3b3fda322d8032ec8c\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:39:27.848642 containerd[1527]: time="2025-12-12T18:39:27.848576138Z" level=info msg="Container 584cb90169a67a0dc41e4b764ccc5159602516811036fd4fe238551cc120830d: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:27.855256 containerd[1527]: time="2025-12-12T18:39:27.855198077Z" level=info msg="connecting to shim 486b854665f718a70d445f5e390ee97b55ab736afa86c2d150b85cca2649a6cd" address="unix:///run/containerd/s/9c624593ed9edb5bf2aeed1b61529e6f2a83307d6414a060851103826b507c9b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:27.858505 containerd[1527]: time="2025-12-12T18:39:27.858455396Z" level=info msg="CreateContainer within sandbox \"cabc40b9f548886794eb8b4bb4211acdd2ee7664b2439b3b3fda322d8032ec8c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"584cb90169a67a0dc41e4b764ccc5159602516811036fd4fe238551cc120830d\"" Dec 12 18:39:27.860117 containerd[1527]: time="2025-12-12T18:39:27.859234032Z" level=info msg="StartContainer for \"584cb90169a67a0dc41e4b764ccc5159602516811036fd4fe238551cc120830d\"" Dec 12 18:39:27.864360 containerd[1527]: time="2025-12-12T18:39:27.863679063Z" level=info msg="connecting to shim 584cb90169a67a0dc41e4b764ccc5159602516811036fd4fe238551cc120830d" address="unix:///run/containerd/s/245b10a37b233a574c716b54265dc34b8139be0465f4c5e308af6179fce85a1c" protocol=ttrpc version=3 Dec 12 18:39:27.899243 systemd[1]: Started cri-containerd-486b854665f718a70d445f5e390ee97b55ab736afa86c2d150b85cca2649a6cd.scope - libcontainer container 486b854665f718a70d445f5e390ee97b55ab736afa86c2d150b85cca2649a6cd. Dec 12 18:39:27.908564 systemd[1]: Started cri-containerd-584cb90169a67a0dc41e4b764ccc5159602516811036fd4fe238551cc120830d.scope - libcontainer container 584cb90169a67a0dc41e4b764ccc5159602516811036fd4fe238551cc120830d. 
Dec 12 18:39:27.972874 containerd[1527]: time="2025-12-12T18:39:27.972776779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-thn2k,Uid:149f65c1-8a23-4947-95dd-4db0ef8609df,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"486b854665f718a70d445f5e390ee97b55ab736afa86c2d150b85cca2649a6cd\"" Dec 12 18:39:27.979023 containerd[1527]: time="2025-12-12T18:39:27.978965919Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:39:27.982803 systemd-resolved[1390]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Dec 12 18:39:28.009328 containerd[1527]: time="2025-12-12T18:39:28.009291711Z" level=info msg="StartContainer for \"584cb90169a67a0dc41e4b764ccc5159602516811036fd4fe238551cc120830d\" returns successfully" Dec 12 18:39:28.644752 kubelet[2702]: E1212 18:39:28.643752 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:28.647396 kubelet[2702]: E1212 18:39:28.647362 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:28.659370 kubelet[2702]: I1212 18:39:28.659144 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6k4px" podStartSLOduration=2.65911746 podStartE2EDuration="2.65911746s" podCreationTimestamp="2025-12-12 18:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:28.658890782 +0000 UTC m=+7.248553965" watchObservedRunningTime="2025-12-12 18:39:28.65911746 +0000 UTC m=+7.248780642" Dec 12 18:39:29.649525 kubelet[2702]: E1212 18:39:29.649420 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:30.451410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704381577.mount: Deactivated successfully. Dec 12 18:39:31.812049 containerd[1527]: time="2025-12-12T18:39:31.811585102Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:31.813167 containerd[1527]: time="2025-12-12T18:39:31.813138213Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 12 18:39:31.814050 containerd[1527]: time="2025-12-12T18:39:31.813834991Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:31.816052 containerd[1527]: time="2025-12-12T18:39:31.816003977Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:31.816938 containerd[1527]: time="2025-12-12T18:39:31.816524232Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.8375097s" Dec 12 18:39:31.817308 containerd[1527]: time="2025-12-12T18:39:31.817264886Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:39:31.819905 containerd[1527]: time="2025-12-12T18:39:31.819854461Z" level=info msg="CreateContainer within sandbox 
\"486b854665f718a70d445f5e390ee97b55ab736afa86c2d150b85cca2649a6cd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:39:31.830092 containerd[1527]: time="2025-12-12T18:39:31.829221814Z" level=info msg="Container 7c56a3f4164fd4b5df45ed5cddcb99d83470f008c7b5f54343c3c6fd18b918ff: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:31.833274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336714430.mount: Deactivated successfully. Dec 12 18:39:31.846608 containerd[1527]: time="2025-12-12T18:39:31.846551971Z" level=info msg="CreateContainer within sandbox \"486b854665f718a70d445f5e390ee97b55ab736afa86c2d150b85cca2649a6cd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7c56a3f4164fd4b5df45ed5cddcb99d83470f008c7b5f54343c3c6fd18b918ff\"" Dec 12 18:39:31.847845 containerd[1527]: time="2025-12-12T18:39:31.847812317Z" level=info msg="StartContainer for \"7c56a3f4164fd4b5df45ed5cddcb99d83470f008c7b5f54343c3c6fd18b918ff\"" Dec 12 18:39:31.849569 containerd[1527]: time="2025-12-12T18:39:31.849540754Z" level=info msg="connecting to shim 7c56a3f4164fd4b5df45ed5cddcb99d83470f008c7b5f54343c3c6fd18b918ff" address="unix:///run/containerd/s/9c624593ed9edb5bf2aeed1b61529e6f2a83307d6414a060851103826b507c9b" protocol=ttrpc version=3 Dec 12 18:39:31.878374 systemd[1]: Started cri-containerd-7c56a3f4164fd4b5df45ed5cddcb99d83470f008c7b5f54343c3c6fd18b918ff.scope - libcontainer container 7c56a3f4164fd4b5df45ed5cddcb99d83470f008c7b5f54343c3c6fd18b918ff. 
Dec 12 18:39:31.920046 containerd[1527]: time="2025-12-12T18:39:31.919705759Z" level=info msg="StartContainer for \"7c56a3f4164fd4b5df45ed5cddcb99d83470f008c7b5f54343c3c6fd18b918ff\" returns successfully" Dec 12 18:39:33.280685 kubelet[2702]: E1212 18:39:33.280443 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:33.303886 kubelet[2702]: I1212 18:39:33.303811 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-thn2k" podStartSLOduration=2.462641049 podStartE2EDuration="6.303785586s" podCreationTimestamp="2025-12-12 18:39:27 +0000 UTC" firstStartedPulling="2025-12-12 18:39:27.977118082 +0000 UTC m=+6.566781263" lastFinishedPulling="2025-12-12 18:39:31.818262624 +0000 UTC m=+10.407925800" observedRunningTime="2025-12-12 18:39:32.673891395 +0000 UTC m=+11.263554579" watchObservedRunningTime="2025-12-12 18:39:33.303785586 +0000 UTC m=+11.893448770" Dec 12 18:39:33.664950 kubelet[2702]: E1212 18:39:33.663702 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:34.161388 update_engine[1511]: I20251212 18:39:34.161298 1511 update_attempter.cc:509] Updating boot flags... Dec 12 18:39:38.936199 sudo[1775]: pam_unix(sudo:session): session closed for user root Dec 12 18:39:38.941352 sshd[1774]: Connection closed by 147.75.109.163 port 46302 Dec 12 18:39:38.942624 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Dec 12 18:39:38.951672 systemd[1]: sshd@6-143.198.226.225:22-147.75.109.163:46302.service: Deactivated successfully. Dec 12 18:39:38.957844 systemd[1]: session-7.scope: Deactivated successfully. 
Dec 12 18:39:38.958186 systemd[1]: session-7.scope: Consumed 5.367s CPU time, 164.9M memory peak. Dec 12 18:39:38.961189 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:39:38.965062 systemd-logind[1510]: Removed session 7. Dec 12 18:39:46.760246 systemd[1]: Created slice kubepods-besteffort-pod1c51a4fc_2be4_4e30_bd18_66a6642244d5.slice - libcontainer container kubepods-besteffort-pod1c51a4fc_2be4_4e30_bd18_66a6642244d5.slice. Dec 12 18:39:46.767468 kubelet[2702]: W1212 18:39:46.767284 2702 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4459.2.2-f-e155308a0b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459.2.2-f-e155308a0b' and this object Dec 12 18:39:46.767468 kubelet[2702]: W1212 18:39:46.767284 2702 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4459.2.2-f-e155308a0b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459.2.2-f-e155308a0b' and this object Dec 12 18:39:46.770257 kubelet[2702]: E1212 18:39:46.770214 2702 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4459.2.2-f-e155308a0b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459.2.2-f-e155308a0b' and this object" logger="UnhandledError" Dec 12 18:39:46.770578 kubelet[2702]: I1212 18:39:46.770369 2702 status_manager.go:890] "Failed to get status for pod" podUID="1c51a4fc-2be4-4e30-bd18-66a6642244d5" pod="calico-system/calico-typha-fbb485855-pz7pn" 
err="pods \"calico-typha-fbb485855-pz7pn\" is forbidden: User \"system:node:ci-4459.2.2-f-e155308a0b\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459.2.2-f-e155308a0b' and this object" Dec 12 18:39:46.770578 kubelet[2702]: E1212 18:39:46.770172 2702 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4459.2.2-f-e155308a0b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459.2.2-f-e155308a0b' and this object" logger="UnhandledError" Dec 12 18:39:46.852352 kubelet[2702]: I1212 18:39:46.852235 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c51a4fc-2be4-4e30-bd18-66a6642244d5-tigera-ca-bundle\") pod \"calico-typha-fbb485855-pz7pn\" (UID: \"1c51a4fc-2be4-4e30-bd18-66a6642244d5\") " pod="calico-system/calico-typha-fbb485855-pz7pn" Dec 12 18:39:46.852352 kubelet[2702]: I1212 18:39:46.852299 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1c51a4fc-2be4-4e30-bd18-66a6642244d5-typha-certs\") pod \"calico-typha-fbb485855-pz7pn\" (UID: \"1c51a4fc-2be4-4e30-bd18-66a6642244d5\") " pod="calico-system/calico-typha-fbb485855-pz7pn" Dec 12 18:39:46.852352 kubelet[2702]: I1212 18:39:46.852322 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tztpp\" (UniqueName: \"kubernetes.io/projected/1c51a4fc-2be4-4e30-bd18-66a6642244d5-kube-api-access-tztpp\") pod \"calico-typha-fbb485855-pz7pn\" (UID: \"1c51a4fc-2be4-4e30-bd18-66a6642244d5\") " pod="calico-system/calico-typha-fbb485855-pz7pn" 
Dec 12 18:39:47.005796 systemd[1]: Created slice kubepods-besteffort-podd778eb31_bb18_40d8_ae3d_c16e7c0037c7.slice - libcontainer container kubepods-besteffort-podd778eb31_bb18_40d8_ae3d_c16e7c0037c7.slice. Dec 12 18:39:47.055133 kubelet[2702]: I1212 18:39:47.054800 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-policysync\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055133 kubelet[2702]: I1212 18:39:47.054867 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-var-run-calico\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055133 kubelet[2702]: I1212 18:39:47.054892 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-xtables-lock\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055133 kubelet[2702]: I1212 18:39:47.054922 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-node-certs\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055133 kubelet[2702]: I1212 18:39:47.054948 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-tigera-ca-bundle\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055606 kubelet[2702]: I1212 18:39:47.054995 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-lib-modules\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055606 kubelet[2702]: I1212 18:39:47.055053 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-flexvol-driver-host\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055606 kubelet[2702]: I1212 18:39:47.055091 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-cni-log-dir\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055606 kubelet[2702]: I1212 18:39:47.055112 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-var-lib-calico\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055606 kubelet[2702]: I1212 18:39:47.055134 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf4pd\" (UniqueName: 
\"kubernetes.io/projected/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-kube-api-access-gf4pd\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055820 kubelet[2702]: I1212 18:39:47.055159 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-cni-net-dir\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.055820 kubelet[2702]: I1212 18:39:47.055181 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-cni-bin-dir\") pod \"calico-node-gghq2\" (UID: \"d778eb31-bb18-40d8-ae3d-c16e7c0037c7\") " pod="calico-system/calico-node-gghq2" Dec 12 18:39:47.165120 kubelet[2702]: E1212 18:39:47.165079 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.168194 kubelet[2702]: W1212 18:39:47.168071 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.169022 kubelet[2702]: E1212 18:39:47.168952 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.188072 kubelet[2702]: E1212 18:39:47.187082 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:39:47.232674 kubelet[2702]: E1212 18:39:47.232624 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.232674 kubelet[2702]: W1212 18:39:47.232657 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.232935 kubelet[2702]: E1212 18:39:47.232683 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.233120 kubelet[2702]: E1212 18:39:47.233100 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.233120 kubelet[2702]: W1212 18:39:47.233117 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.233216 kubelet[2702]: E1212 18:39:47.233133 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.256456 kubelet[2702]: E1212 18:39:47.256420 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.256456 kubelet[2702]: W1212 18:39:47.256447 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.257145 kubelet[2702]: E1212 18:39:47.256474 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.257145 kubelet[2702]: I1212 18:39:47.256509 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/efd42e2e-3a4f-425a-9c07-184c94bcbd7e-varrun\") pod \"csi-node-driver-2flfr\" (UID: \"efd42e2e-3a4f-425a-9c07-184c94bcbd7e\") " pod="calico-system/csi-node-driver-2flfr" Dec 12 18:39:47.257248 kubelet[2702]: E1212 18:39:47.257177 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.257248 kubelet[2702]: W1212 18:39:47.257196 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.257248 kubelet[2702]: E1212 18:39:47.257220 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.257248 kubelet[2702]: I1212 18:39:47.257242 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/efd42e2e-3a4f-425a-9c07-184c94bcbd7e-registration-dir\") pod \"csi-node-driver-2flfr\" (UID: \"efd42e2e-3a4f-425a-9c07-184c94bcbd7e\") " pod="calico-system/csi-node-driver-2flfr" Dec 12 18:39:47.257451 kubelet[2702]: E1212 18:39:47.257437 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.257451 kubelet[2702]: W1212 18:39:47.257448 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.257597 kubelet[2702]: E1212 18:39:47.257458 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.257597 kubelet[2702]: I1212 18:39:47.257548 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw94t\" (UniqueName: \"kubernetes.io/projected/efd42e2e-3a4f-425a-9c07-184c94bcbd7e-kube-api-access-lw94t\") pod \"csi-node-driver-2flfr\" (UID: \"efd42e2e-3a4f-425a-9c07-184c94bcbd7e\") " pod="calico-system/csi-node-driver-2flfr" Dec 12 18:39:47.257694 kubelet[2702]: E1212 18:39:47.257614 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.257694 kubelet[2702]: W1212 18:39:47.257620 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.257694 kubelet[2702]: E1212 18:39:47.257633 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.257868 kubelet[2702]: E1212 18:39:47.257784 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.257868 kubelet[2702]: W1212 18:39:47.257790 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.257868 kubelet[2702]: E1212 18:39:47.257798 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.258552 kubelet[2702]: E1212 18:39:47.258536 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.258552 kubelet[2702]: W1212 18:39:47.258546 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.258605 kubelet[2702]: E1212 18:39:47.258567 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.258605 kubelet[2702]: I1212 18:39:47.258592 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efd42e2e-3a4f-425a-9c07-184c94bcbd7e-kubelet-dir\") pod \"csi-node-driver-2flfr\" (UID: \"efd42e2e-3a4f-425a-9c07-184c94bcbd7e\") " pod="calico-system/csi-node-driver-2flfr" Dec 12 18:39:47.258802 kubelet[2702]: E1212 18:39:47.258786 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.258802 kubelet[2702]: W1212 18:39:47.258802 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.258865 kubelet[2702]: E1212 18:39:47.258816 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.259611 kubelet[2702]: I1212 18:39:47.259186 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/efd42e2e-3a4f-425a-9c07-184c94bcbd7e-socket-dir\") pod \"csi-node-driver-2flfr\" (UID: \"efd42e2e-3a4f-425a-9c07-184c94bcbd7e\") " pod="calico-system/csi-node-driver-2flfr" Dec 12 18:39:47.259611 kubelet[2702]: E1212 18:39:47.259350 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.259611 kubelet[2702]: W1212 18:39:47.259357 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.259611 kubelet[2702]: E1212 18:39:47.259364 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.259611 kubelet[2702]: E1212 18:39:47.259488 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.259611 kubelet[2702]: W1212 18:39:47.259494 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.259872 kubelet[2702]: E1212 18:39:47.259501 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.369256 kubelet[2702]: E1212 18:39:47.369156 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.369256 kubelet[2702]: W1212 18:39:47.369170 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.369440 kubelet[2702]: E1212 18:39:47.369299 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.369440 kubelet[2702]: E1212 18:39:47.369419 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.369440 kubelet[2702]: W1212 18:39:47.369430 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.369558 kubelet[2702]: E1212 18:39:47.369547 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.369810 kubelet[2702]: E1212 18:39:47.369791 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.369810 kubelet[2702]: W1212 18:39:47.369804 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.369945 kubelet[2702]: E1212 18:39:47.369819 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.370619 kubelet[2702]: E1212 18:39:47.369994 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.370619 kubelet[2702]: W1212 18:39:47.370005 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.370619 kubelet[2702]: E1212 18:39:47.370014 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.370619 kubelet[2702]: E1212 18:39:47.370193 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.370619 kubelet[2702]: W1212 18:39:47.370214 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.370619 kubelet[2702]: E1212 18:39:47.370223 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.370619 kubelet[2702]: E1212 18:39:47.370612 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.370619 kubelet[2702]: W1212 18:39:47.370622 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.371146 kubelet[2702]: E1212 18:39:47.370633 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.371146 kubelet[2702]: E1212 18:39:47.370891 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.371146 kubelet[2702]: W1212 18:39:47.370900 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.371146 kubelet[2702]: E1212 18:39:47.370926 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.371311 kubelet[2702]: E1212 18:39:47.371263 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.371311 kubelet[2702]: W1212 18:39:47.371273 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.371395 kubelet[2702]: E1212 18:39:47.371291 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.372057 kubelet[2702]: E1212 18:39:47.371550 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.372057 kubelet[2702]: W1212 18:39:47.371562 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.372057 kubelet[2702]: E1212 18:39:47.371590 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.372057 kubelet[2702]: E1212 18:39:47.371774 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.372057 kubelet[2702]: W1212 18:39:47.371782 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.372057 kubelet[2702]: E1212 18:39:47.371793 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.372057 kubelet[2702]: E1212 18:39:47.372007 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.372057 kubelet[2702]: W1212 18:39:47.372016 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.372447 kubelet[2702]: E1212 18:39:47.372143 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.372447 kubelet[2702]: E1212 18:39:47.372291 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.372447 kubelet[2702]: W1212 18:39:47.372298 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.372447 kubelet[2702]: E1212 18:39:47.372310 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.372618 kubelet[2702]: E1212 18:39:47.372522 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.372618 kubelet[2702]: W1212 18:39:47.372529 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.372618 kubelet[2702]: E1212 18:39:47.372576 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.605918 kubelet[2702]: E1212 18:39:47.605825 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.605918 kubelet[2702]: W1212 18:39:47.605849 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.605918 kubelet[2702]: E1212 18:39:47.605871 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:47.608382 kubelet[2702]: E1212 18:39:47.608348 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.608382 kubelet[2702]: W1212 18:39:47.608378 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.608578 kubelet[2702]: E1212 18:39:47.608406 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:47.970376 kubelet[2702]: E1212 18:39:47.970333 2702 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:39:47.970376 kubelet[2702]: E1212 18:39:47.970384 2702 projected.go:194] Error preparing data for projected volume kube-api-access-tztpp for pod calico-system/calico-typha-fbb485855-pz7pn: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:39:47.971663 kubelet[2702]: E1212 18:39:47.970578 2702 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c51a4fc-2be4-4e30-bd18-66a6642244d5-kube-api-access-tztpp podName:1c51a4fc-2be4-4e30-bd18-66a6642244d5 nodeName:}" failed. No retries permitted until 2025-12-12 18:39:48.470453117 +0000 UTC m=+27.060116303 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tztpp" (UniqueName: "kubernetes.io/projected/1c51a4fc-2be4-4e30-bd18-66a6642244d5-kube-api-access-tztpp") pod "calico-typha-fbb485855-pz7pn" (UID: "1c51a4fc-2be4-4e30-bd18-66a6642244d5") : failed to sync configmap cache: timed out waiting for the condition Dec 12 18:39:47.975150 kubelet[2702]: E1212 18:39:47.975045 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:47.975150 kubelet[2702]: W1212 18:39:47.975075 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:47.975150 kubelet[2702]: E1212 18:39:47.975098 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:48.077637 kubelet[2702]: E1212 18:39:48.077590 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.077637 kubelet[2702]: W1212 18:39:48.077633 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.077832 kubelet[2702]: E1212 18:39:48.077674 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:48.178614 kubelet[2702]: E1212 18:39:48.178549 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.178614 kubelet[2702]: W1212 18:39:48.178581 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.178614 kubelet[2702]: E1212 18:39:48.178608 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:48.180870 kubelet[2702]: E1212 18:39:48.180803 2702 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:39:48.180870 kubelet[2702]: E1212 18:39:48.180862 2702 projected.go:194] Error preparing data for projected volume kube-api-access-gf4pd for pod calico-system/calico-node-gghq2: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:39:48.181666 kubelet[2702]: E1212 18:39:48.181603 2702 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-kube-api-access-gf4pd podName:d778eb31-bb18-40d8-ae3d-c16e7c0037c7 nodeName:}" failed. No retries permitted until 2025-12-12 18:39:48.680934386 +0000 UTC m=+27.270597563 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gf4pd" (UniqueName: "kubernetes.io/projected/d778eb31-bb18-40d8-ae3d-c16e7c0037c7-kube-api-access-gf4pd") pod "calico-node-gghq2" (UID: "d778eb31-bb18-40d8-ae3d-c16e7c0037c7") : failed to sync configmap cache: timed out waiting for the condition Dec 12 18:39:48.280053 kubelet[2702]: E1212 18:39:48.279964 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.280053 kubelet[2702]: W1212 18:39:48.279997 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.280576 kubelet[2702]: E1212 18:39:48.280316 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:48.280929 kubelet[2702]: E1212 18:39:48.280827 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.280929 kubelet[2702]: W1212 18:39:48.280847 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.280929 kubelet[2702]: E1212 18:39:48.280869 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:48.367561 kubelet[2702]: E1212 18:39:48.367518 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.367561 kubelet[2702]: W1212 18:39:48.367554 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.367775 kubelet[2702]: E1212 18:39:48.367592 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:48.382665 kubelet[2702]: E1212 18:39:48.382406 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.382665 kubelet[2702]: W1212 18:39:48.382440 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.382665 kubelet[2702]: E1212 18:39:48.382471 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:48.384173 kubelet[2702]: E1212 18:39:48.383946 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.384173 kubelet[2702]: W1212 18:39:48.383971 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.384173 kubelet[2702]: E1212 18:39:48.384000 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:48.485836 kubelet[2702]: E1212 18:39:48.485671 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.485836 kubelet[2702]: W1212 18:39:48.485706 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.485836 kubelet[2702]: E1212 18:39:48.485737 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:48.486271 kubelet[2702]: E1212 18:39:48.486071 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.486271 kubelet[2702]: W1212 18:39:48.486084 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.486271 kubelet[2702]: E1212 18:39:48.486108 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:48.486439 kubelet[2702]: E1212 18:39:48.486339 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.486439 kubelet[2702]: W1212 18:39:48.486349 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.486439 kubelet[2702]: E1212 18:39:48.486361 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:48.486570 kubelet[2702]: E1212 18:39:48.486552 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.486570 kubelet[2702]: W1212 18:39:48.486562 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.486692 kubelet[2702]: E1212 18:39:48.486575 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:48.486767 kubelet[2702]: E1212 18:39:48.486751 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.486767 kubelet[2702]: W1212 18:39:48.486763 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.486867 kubelet[2702]: E1212 18:39:48.486775 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:48.487018 kubelet[2702]: E1212 18:39:48.486989 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.487018 kubelet[2702]: W1212 18:39:48.487015 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.487198 kubelet[2702]: E1212 18:39:48.487073 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:48.494899 kubelet[2702]: E1212 18:39:48.494769 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.494899 kubelet[2702]: W1212 18:39:48.494805 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.494899 kubelet[2702]: E1212 18:39:48.494838 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:48.565965 kubelet[2702]: E1212 18:39:48.565413 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:48.566957 containerd[1527]: time="2025-12-12T18:39:48.566766340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fbb485855-pz7pn,Uid:1c51a4fc-2be4-4e30-bd18-66a6642244d5,Namespace:calico-system,Attempt:0,}" Dec 12 18:39:48.588207 kubelet[2702]: E1212 18:39:48.588131 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.588207 kubelet[2702]: W1212 18:39:48.588199 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.588403 kubelet[2702]: E1212 18:39:48.588264 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:48.593268 kubelet[2702]: E1212 18:39:48.589818 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:39:48.597216 containerd[1527]: time="2025-12-12T18:39:48.596420919Z" level=info msg="connecting to shim 87dff90d28cb6e91d8fa08a1f45c9c9c45f7c5497d00920ec8a0fd5f04d6662c" address="unix:///run/containerd/s/41f5ad75df02151654b78a8fc7f8744f9bf6b227b24cac27ef89e27c0df879dc" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:48.646338 systemd[1]: Started cri-containerd-87dff90d28cb6e91d8fa08a1f45c9c9c45f7c5497d00920ec8a0fd5f04d6662c.scope - libcontainer container 87dff90d28cb6e91d8fa08a1f45c9c9c45f7c5497d00920ec8a0fd5f04d6662c. Dec 12 18:39:48.690337 kubelet[2702]: E1212 18:39:48.690284 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.690337 kubelet[2702]: W1212 18:39:48.690314 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.690337 kubelet[2702]: E1212 18:39:48.690345 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:48.691348 kubelet[2702]: E1212 18:39:48.691129 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.691348 kubelet[2702]: W1212 18:39:48.691161 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.691348 kubelet[2702]: E1212 18:39:48.691188 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:48.691787 kubelet[2702]: E1212 18:39:48.691674 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:48.691787 kubelet[2702]: W1212 18:39:48.691695 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:48.691787 kubelet[2702]: E1212 18:39:48.691715 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 12 18:39:48.692442 kubelet[2702]: E1212 18:39:48.692249 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:48.692442 kubelet[2702]: W1212 18:39:48.692271 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:48.692442 kubelet[2702]: E1212 18:39:48.692291 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:48.692913 kubelet[2702]: E1212 18:39:48.692892 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:48.693071 kubelet[2702]: W1212 18:39:48.693015 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:48.693688 kubelet[2702]: E1212 18:39:48.693227 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:48.702518 kubelet[2702]: E1212 18:39:48.702476 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:48.704066 kubelet[2702]: W1212 18:39:48.703112 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:48.704315 kubelet[2702]: E1212 18:39:48.704282 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:48.732974 containerd[1527]: time="2025-12-12T18:39:48.732898199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fbb485855-pz7pn,Uid:1c51a4fc-2be4-4e30-bd18-66a6642244d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"87dff90d28cb6e91d8fa08a1f45c9c9c45f7c5497d00920ec8a0fd5f04d6662c\""
Dec 12 18:39:48.734062 kubelet[2702]: E1212 18:39:48.733946 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:48.737092 containerd[1527]: time="2025-12-12T18:39:48.737056219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 12 18:39:48.810580 kubelet[2702]: E1212 18:39:48.810285 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:48.811634 containerd[1527]: time="2025-12-12T18:39:48.811230076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gghq2,Uid:d778eb31-bb18-40d8-ae3d-c16e7c0037c7,Namespace:calico-system,Attempt:0,}"
Dec 12 18:39:48.840768 containerd[1527]: time="2025-12-12T18:39:48.839509181Z" level=info msg="connecting to shim b18e3921a8a1a88a09d84a392d90c662d21accd20ab5a736fd4e1bcd28d04f4a" address="unix:///run/containerd/s/640495d58fce1ed7f13a6110be4f9cf6bd01099f0536a16d8590dcc82c2625ab" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:39:48.879345 systemd[1]: Started cri-containerd-b18e3921a8a1a88a09d84a392d90c662d21accd20ab5a736fd4e1bcd28d04f4a.scope - libcontainer container b18e3921a8a1a88a09d84a392d90c662d21accd20ab5a736fd4e1bcd28d04f4a.
Dec 12 18:39:48.933706 containerd[1527]: time="2025-12-12T18:39:48.933652038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gghq2,Uid:d778eb31-bb18-40d8-ae3d-c16e7c0037c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"b18e3921a8a1a88a09d84a392d90c662d21accd20ab5a736fd4e1bcd28d04f4a\""
Dec 12 18:39:48.935358 kubelet[2702]: E1212 18:39:48.935296 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:50.039215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669373107.mount: Deactivated successfully.
Dec 12 18:39:50.590727 kubelet[2702]: E1212 18:39:50.590626 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e"
Dec 12 18:39:51.108060 containerd[1527]: time="2025-12-12T18:39:51.107971260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:51.159052 containerd[1527]: time="2025-12-12T18:39:51.109382389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Dec 12 18:39:51.159266 containerd[1527]: time="2025-12-12T18:39:51.113773748Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:51.159266 containerd[1527]: time="2025-12-12T18:39:51.116633541Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.379545039s"
Dec 12 18:39:51.159266 containerd[1527]: time="2025-12-12T18:39:51.159220601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Dec 12 18:39:51.160684 containerd[1527]: time="2025-12-12T18:39:51.160630175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:51.165405 containerd[1527]: time="2025-12-12T18:39:51.165085994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 18:39:51.194437 containerd[1527]: time="2025-12-12T18:39:51.194384994Z" level=info msg="CreateContainer within sandbox \"87dff90d28cb6e91d8fa08a1f45c9c9c45f7c5497d00920ec8a0fd5f04d6662c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 18:39:51.202412 containerd[1527]: time="2025-12-12T18:39:51.202358031Z" level=info msg="Container d91b9e14799791a926662033edc9d70f2ce719efa0b60c302e0a0392228ec051: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:39:51.211866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754866041.mount: Deactivated successfully.
Dec 12 18:39:51.219178 containerd[1527]: time="2025-12-12T18:39:51.219140326Z" level=info msg="CreateContainer within sandbox \"87dff90d28cb6e91d8fa08a1f45c9c9c45f7c5497d00920ec8a0fd5f04d6662c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d91b9e14799791a926662033edc9d70f2ce719efa0b60c302e0a0392228ec051\""
Dec 12 18:39:51.221701 containerd[1527]: time="2025-12-12T18:39:51.221657359Z" level=info msg="StartContainer for \"d91b9e14799791a926662033edc9d70f2ce719efa0b60c302e0a0392228ec051\""
Dec 12 18:39:51.225213 containerd[1527]: time="2025-12-12T18:39:51.225133222Z" level=info msg="connecting to shim d91b9e14799791a926662033edc9d70f2ce719efa0b60c302e0a0392228ec051" address="unix:///run/containerd/s/41f5ad75df02151654b78a8fc7f8744f9bf6b227b24cac27ef89e27c0df879dc" protocol=ttrpc version=3
Dec 12 18:39:51.256401 systemd[1]: Started cri-containerd-d91b9e14799791a926662033edc9d70f2ce719efa0b60c302e0a0392228ec051.scope - libcontainer container d91b9e14799791a926662033edc9d70f2ce719efa0b60c302e0a0392228ec051.
Dec 12 18:39:51.326477 containerd[1527]: time="2025-12-12T18:39:51.326369870Z" level=info msg="StartContainer for \"d91b9e14799791a926662033edc9d70f2ce719efa0b60c302e0a0392228ec051\" returns successfully"
Dec 12 18:39:51.725545 kubelet[2702]: E1212 18:39:51.725483 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:51.774379 kubelet[2702]: E1212 18:39:51.773900 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.774379 kubelet[2702]: W1212 18:39:51.773929 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.774379 kubelet[2702]: E1212 18:39:51.773957 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.776001 kubelet[2702]: E1212 18:39:51.775963 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.776567 kubelet[2702]: W1212 18:39:51.776458 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.777128 kubelet[2702]: E1212 18:39:51.776492 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.777342 kubelet[2702]: E1212 18:39:51.777286 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.777597 kubelet[2702]: W1212 18:39:51.777579 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.777766 kubelet[2702]: E1212 18:39:51.777752 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.779402 kubelet[2702]: E1212 18:39:51.779231 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.779402 kubelet[2702]: W1212 18:39:51.779259 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.779402 kubelet[2702]: E1212 18:39:51.779277 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.779794 kubelet[2702]: E1212 18:39:51.779764 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.779993 kubelet[2702]: W1212 18:39:51.779914 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.779993 kubelet[2702]: E1212 18:39:51.779937 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.782061 kubelet[2702]: E1212 18:39:51.781445 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.782435 kubelet[2702]: W1212 18:39:51.782246 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.782435 kubelet[2702]: E1212 18:39:51.782278 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.782617 kubelet[2702]: E1212 18:39:51.782605 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.782684 kubelet[2702]: W1212 18:39:51.782673 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.782736 kubelet[2702]: E1212 18:39:51.782727 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.784049 kubelet[2702]: E1212 18:39:51.783110 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.784049 kubelet[2702]: W1212 18:39:51.783123 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.784049 kubelet[2702]: E1212 18:39:51.783136 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.784354 kubelet[2702]: E1212 18:39:51.784341 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.784580 kubelet[2702]: W1212 18:39:51.784380 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.784580 kubelet[2702]: E1212 18:39:51.784397 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.784695 kubelet[2702]: E1212 18:39:51.784684 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.784745 kubelet[2702]: W1212 18:39:51.784736 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.784798 kubelet[2702]: E1212 18:39:51.784789 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.785275 kubelet[2702]: E1212 18:39:51.785163 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.785275 kubelet[2702]: W1212 18:39:51.785176 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.785275 kubelet[2702]: E1212 18:39:51.785188 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.786217 kubelet[2702]: E1212 18:39:51.786200 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.786306 kubelet[2702]: W1212 18:39:51.786295 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.786369 kubelet[2702]: E1212 18:39:51.786353 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.787111 kubelet[2702]: E1212 18:39:51.786653 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.787111 kubelet[2702]: W1212 18:39:51.786666 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.787111 kubelet[2702]: E1212 18:39:51.786677 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.787299 kubelet[2702]: E1212 18:39:51.787287 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.787438 kubelet[2702]: W1212 18:39:51.787340 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.787438 kubelet[2702]: E1212 18:39:51.787354 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.788461 kubelet[2702]: E1212 18:39:51.788200 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.788573 kubelet[2702]: W1212 18:39:51.788557 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.788629 kubelet[2702]: E1212 18:39:51.788620 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.815684 kubelet[2702]: E1212 18:39:51.815636 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.815684 kubelet[2702]: W1212 18:39:51.815665 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.815684 kubelet[2702]: E1212 18:39:51.815689 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.816139 kubelet[2702]: E1212 18:39:51.815998 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.816139 kubelet[2702]: W1212 18:39:51.816009 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.816139 kubelet[2702]: E1212 18:39:51.816059 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.816458 kubelet[2702]: E1212 18:39:51.816290 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.816458 kubelet[2702]: W1212 18:39:51.816298 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.816458 kubelet[2702]: E1212 18:39:51.816312 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.816654 kubelet[2702]: E1212 18:39:51.816526 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.816654 kubelet[2702]: W1212 18:39:51.816534 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.816654 kubelet[2702]: E1212 18:39:51.816548 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.816799 kubelet[2702]: E1212 18:39:51.816703 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.816799 kubelet[2702]: W1212 18:39:51.816710 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.816799 kubelet[2702]: E1212 18:39:51.816729 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.817020 kubelet[2702]: E1212 18:39:51.816880 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.817020 kubelet[2702]: W1212 18:39:51.816886 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.817020 kubelet[2702]: E1212 18:39:51.816923 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.817206 kubelet[2702]: E1212 18:39:51.817081 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.817206 kubelet[2702]: W1212 18:39:51.817094 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.817206 kubelet[2702]: E1212 18:39:51.817176 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.817367 kubelet[2702]: E1212 18:39:51.817350 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.817367 kubelet[2702]: W1212 18:39:51.817366 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.817648 kubelet[2702]: E1212 18:39:51.817381 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.817648 kubelet[2702]: E1212 18:39:51.817525 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.817648 kubelet[2702]: W1212 18:39:51.817531 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.817648 kubelet[2702]: E1212 18:39:51.817539 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.817859 kubelet[2702]: E1212 18:39:51.817670 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.817859 kubelet[2702]: W1212 18:39:51.817679 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.817859 kubelet[2702]: E1212 18:39:51.817703 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.818170 kubelet[2702]: E1212 18:39:51.818009 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.818170 kubelet[2702]: W1212 18:39:51.818021 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.818170 kubelet[2702]: E1212 18:39:51.818063 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.818799 kubelet[2702]: E1212 18:39:51.818694 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.818799 kubelet[2702]: W1212 18:39:51.818709 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.818799 kubelet[2702]: E1212 18:39:51.818751 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.818987 kubelet[2702]: E1212 18:39:51.818907 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.818987 kubelet[2702]: W1212 18:39:51.818919 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.819095 kubelet[2702]: E1212 18:39:51.819078 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.819095 kubelet[2702]: W1212 18:39:51.819087 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.819199 kubelet[2702]: E1212 18:39:51.819096 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.819248 kubelet[2702]: E1212 18:39:51.819216 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.819248 kubelet[2702]: W1212 18:39:51.819222 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.819248 kubelet[2702]: E1212 18:39:51.819230 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.819374 kubelet[2702]: E1212 18:39:51.819370 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.819422 kubelet[2702]: W1212 18:39:51.819376 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.819422 kubelet[2702]: E1212 18:39:51.819383 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.819922 kubelet[2702]: E1212 18:39:51.819544 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.819922 kubelet[2702]: E1212 18:39:51.819721 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.819922 kubelet[2702]: W1212 18:39:51.819732 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.819922 kubelet[2702]: E1212 18:39:51.819745 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:51.820311 kubelet[2702]: E1212 18:39:51.820062 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:51.820311 kubelet[2702]: W1212 18:39:51.820079 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:51.820311 kubelet[2702]: E1212 18:39:51.820097 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:52.590206 kubelet[2702]: E1212 18:39:52.590152 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e"
Dec 12 18:39:52.726007 kubelet[2702]: I1212 18:39:52.725947 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 18:39:52.727056 kubelet[2702]: E1212 18:39:52.726797 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:39:52.743390 containerd[1527]: time="2025-12-12T18:39:52.743280858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:52.744565 containerd[1527]: time="2025-12-12T18:39:52.744525835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Dec 12 18:39:52.745514 containerd[1527]: time="2025-12-12T18:39:52.745471266Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:52.747847 containerd[1527]: time="2025-12-12T18:39:52.747802731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:52.748841 containerd[1527]: time="2025-12-12T18:39:52.748797293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.583660288s"
Dec 12 18:39:52.748938 containerd[1527]: time="2025-12-12T18:39:52.748846276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Dec 12 18:39:52.753703 containerd[1527]: time="2025-12-12T18:39:52.753661695Z" level=info msg="CreateContainer within sandbox \"b18e3921a8a1a88a09d84a392d90c662d21accd20ab5a736fd4e1bcd28d04f4a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 12 18:39:52.763193 containerd[1527]: time="2025-12-12T18:39:52.763150188Z" level=info msg="Container d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:39:52.783734 containerd[1527]: time="2025-12-12T18:39:52.783643769Z" level=info msg="CreateContainer within sandbox \"b18e3921a8a1a88a09d84a392d90c662d21accd20ab5a736fd4e1bcd28d04f4a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7\""
Dec 12 18:39:52.784787 containerd[1527]: time="2025-12-12T18:39:52.784563093Z" level=info msg="StartContainer for \"d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7\""
Dec 12 18:39:52.792821 containerd[1527]: time="2025-12-12T18:39:52.792769609Z" level=info msg="connecting to shim d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7" address="unix:///run/containerd/s/640495d58fce1ed7f13a6110be4f9cf6bd01099f0536a16d8590dcc82c2625ab" protocol=ttrpc version=3
Dec 12 18:39:52.795045 kubelet[2702]: E1212 18:39:52.794569 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:52.795045 kubelet[2702]: W1212 18:39:52.794614 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:52.795045 kubelet[2702]: E1212 18:39:52.794643 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:52.795045 kubelet[2702]: E1212 18:39:52.794832 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:52.795045 kubelet[2702]: W1212 18:39:52.794839 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:52.795045 kubelet[2702]: E1212 18:39:52.794848 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:52.795045 kubelet[2702]: E1212 18:39:52.794998 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:52.795045 kubelet[2702]: W1212 18:39:52.795004 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:52.795045 kubelet[2702]: E1212 18:39:52.795011 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:52.795659 kubelet[2702]: E1212 18:39:52.795644 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:52.796810 kubelet[2702]: W1212 18:39:52.795714 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:52.796810 kubelet[2702]: E1212 18:39:52.795729 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:52.796810 kubelet[2702]: E1212 18:39:52.796692 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:52.796810 kubelet[2702]: W1212 18:39:52.796715 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:52.796810 kubelet[2702]: E1212 18:39:52.796728 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:52.797191 kubelet[2702]: E1212 18:39:52.797178 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:52.797271 kubelet[2702]: W1212 18:39:52.797261 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:52.797343 kubelet[2702]: E1212 18:39:52.797334 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:52.797601 kubelet[2702]: E1212 18:39:52.797590 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:52.800254 kubelet[2702]: W1212 18:39:52.800093 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:52.800254 kubelet[2702]: E1212 18:39:52.800123 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.800778 kubelet[2702]: E1212 18:39:52.800593 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.800778 kubelet[2702]: W1212 18:39:52.800624 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.800778 kubelet[2702]: E1212 18:39:52.800642 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.801421 kubelet[2702]: E1212 18:39:52.801381 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.801829 kubelet[2702]: W1212 18:39:52.801668 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.801829 kubelet[2702]: E1212 18:39:52.801686 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.804217 kubelet[2702]: E1212 18:39:52.804094 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.806450 kubelet[2702]: W1212 18:39:52.806159 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.806450 kubelet[2702]: E1212 18:39:52.806222 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.806968 kubelet[2702]: E1212 18:39:52.806948 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.807729 kubelet[2702]: W1212 18:39:52.807270 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.807729 kubelet[2702]: E1212 18:39:52.807297 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.808175 kubelet[2702]: E1212 18:39:52.807870 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.808175 kubelet[2702]: W1212 18:39:52.807884 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.808175 kubelet[2702]: E1212 18:39:52.807899 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.808647 kubelet[2702]: E1212 18:39:52.808549 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.808991 kubelet[2702]: W1212 18:39:52.808875 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.808991 kubelet[2702]: E1212 18:39:52.808896 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.809604 kubelet[2702]: E1212 18:39:52.809563 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.809836 kubelet[2702]: W1212 18:39:52.809665 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.809836 kubelet[2702]: E1212 18:39:52.809683 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.810041 kubelet[2702]: E1212 18:39:52.810018 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.810576 kubelet[2702]: W1212 18:39:52.810321 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.810576 kubelet[2702]: E1212 18:39:52.810344 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.823630 kubelet[2702]: E1212 18:39:52.823597 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.823811 kubelet[2702]: W1212 18:39:52.823791 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.823952 kubelet[2702]: E1212 18:39:52.823866 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.824943 kubelet[2702]: E1212 18:39:52.824753 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.824943 kubelet[2702]: W1212 18:39:52.824875 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.824943 kubelet[2702]: E1212 18:39:52.824898 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.825293 systemd[1]: Started cri-containerd-d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7.scope - libcontainer container d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7. 
Dec 12 18:39:52.826682 kubelet[2702]: E1212 18:39:52.825621 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.826682 kubelet[2702]: W1212 18:39:52.825635 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.826682 kubelet[2702]: E1212 18:39:52.825655 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.826682 kubelet[2702]: E1212 18:39:52.825844 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.826682 kubelet[2702]: W1212 18:39:52.825853 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.826682 kubelet[2702]: E1212 18:39:52.825862 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.826682 kubelet[2702]: E1212 18:39:52.825991 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.826682 kubelet[2702]: W1212 18:39:52.825997 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.826682 kubelet[2702]: E1212 18:39:52.826005 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.826682 kubelet[2702]: E1212 18:39:52.826256 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.827108 kubelet[2702]: W1212 18:39:52.826270 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.827108 kubelet[2702]: E1212 18:39:52.826282 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.827108 kubelet[2702]: E1212 18:39:52.826668 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.827108 kubelet[2702]: W1212 18:39:52.826681 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.827108 kubelet[2702]: E1212 18:39:52.826695 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.827108 kubelet[2702]: E1212 18:39:52.826857 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.827108 kubelet[2702]: W1212 18:39:52.826865 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.827108 kubelet[2702]: E1212 18:39:52.826876 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.827108 kubelet[2702]: E1212 18:39:52.827046 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.827108 kubelet[2702]: W1212 18:39:52.827053 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.827481 kubelet[2702]: E1212 18:39:52.827061 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.827481 kubelet[2702]: E1212 18:39:52.827194 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.827481 kubelet[2702]: W1212 18:39:52.827200 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.827481 kubelet[2702]: E1212 18:39:52.827207 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.827481 kubelet[2702]: E1212 18:39:52.827318 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.827481 kubelet[2702]: W1212 18:39:52.827323 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.827481 kubelet[2702]: E1212 18:39:52.827330 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.827481 kubelet[2702]: E1212 18:39:52.827472 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.827481 kubelet[2702]: W1212 18:39:52.827478 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.827481 kubelet[2702]: E1212 18:39:52.827485 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.828974 kubelet[2702]: E1212 18:39:52.827795 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.828974 kubelet[2702]: W1212 18:39:52.827805 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.828974 kubelet[2702]: E1212 18:39:52.827815 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.828974 kubelet[2702]: E1212 18:39:52.827950 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.828974 kubelet[2702]: W1212 18:39:52.827956 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.828974 kubelet[2702]: E1212 18:39:52.827963 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.828974 kubelet[2702]: E1212 18:39:52.828115 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.828974 kubelet[2702]: W1212 18:39:52.828121 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.828974 kubelet[2702]: E1212 18:39:52.828129 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.828974 kubelet[2702]: E1212 18:39:52.828252 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.830085 kubelet[2702]: W1212 18:39:52.828264 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.830085 kubelet[2702]: E1212 18:39:52.828271 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.830085 kubelet[2702]: E1212 18:39:52.829405 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.830085 kubelet[2702]: W1212 18:39:52.829425 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.830085 kubelet[2702]: E1212 18:39:52.829442 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:52.830085 kubelet[2702]: E1212 18:39:52.830015 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:52.830085 kubelet[2702]: W1212 18:39:52.830049 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:52.830085 kubelet[2702]: E1212 18:39:52.830062 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:52.937423 containerd[1527]: time="2025-12-12T18:39:52.937261784Z" level=info msg="StartContainer for \"d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7\" returns successfully" Dec 12 18:39:52.957333 systemd[1]: cri-containerd-d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7.scope: Deactivated successfully. Dec 12 18:39:53.000502 containerd[1527]: time="2025-12-12T18:39:53.000430146Z" level=info msg="received container exit event container_id:\"d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7\" id:\"d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7\" pid:3442 exited_at:{seconds:1765564792 nanos:964667299}" Dec 12 18:39:53.038535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7e5c840a175546052168ccfc2ce4bc20e3e0297b546be416342720071d82ba7-rootfs.mount: Deactivated successfully. 
Dec 12 18:39:53.733000 kubelet[2702]: E1212 18:39:53.732950 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:53.738194 containerd[1527]: time="2025-12-12T18:39:53.737800871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:39:53.758970 kubelet[2702]: I1212 18:39:53.758163 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fbb485855-pz7pn" podStartSLOduration=5.329991232 podStartE2EDuration="7.758114657s" podCreationTimestamp="2025-12-12 18:39:46 +0000 UTC" firstStartedPulling="2025-12-12 18:39:48.736263773 +0000 UTC m=+27.325926951" lastFinishedPulling="2025-12-12 18:39:51.16438719 +0000 UTC m=+29.754050376" observedRunningTime="2025-12-12 18:39:51.767776679 +0000 UTC m=+30.357439864" watchObservedRunningTime="2025-12-12 18:39:53.758114657 +0000 UTC m=+32.347777843" Dec 12 18:39:54.590458 kubelet[2702]: E1212 18:39:54.590349 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:39:56.590990 kubelet[2702]: E1212 18:39:56.590252 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:39:57.665986 containerd[1527]: time="2025-12-12T18:39:57.665910178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 
18:39:57.667514 containerd[1527]: time="2025-12-12T18:39:57.667209273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 12 18:39:57.668355 containerd[1527]: time="2025-12-12T18:39:57.668316236Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:57.670919 containerd[1527]: time="2025-12-12T18:39:57.670833478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:57.672258 containerd[1527]: time="2025-12-12T18:39:57.672215813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.934360264s" Dec 12 18:39:57.672436 containerd[1527]: time="2025-12-12T18:39:57.672416674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:39:57.676702 containerd[1527]: time="2025-12-12T18:39:57.676630002Z" level=info msg="CreateContainer within sandbox \"b18e3921a8a1a88a09d84a392d90c662d21accd20ab5a736fd4e1bcd28d04f4a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:39:57.736440 containerd[1527]: time="2025-12-12T18:39:57.736276338Z" level=info msg="Container 679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:57.750021 containerd[1527]: time="2025-12-12T18:39:57.749954690Z" level=info msg="CreateContainer within sandbox 
\"b18e3921a8a1a88a09d84a392d90c662d21accd20ab5a736fd4e1bcd28d04f4a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a\"" Dec 12 18:39:57.751292 containerd[1527]: time="2025-12-12T18:39:57.751241667Z" level=info msg="StartContainer for \"679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a\"" Dec 12 18:39:57.756664 containerd[1527]: time="2025-12-12T18:39:57.756610924Z" level=info msg="connecting to shim 679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a" address="unix:///run/containerd/s/640495d58fce1ed7f13a6110be4f9cf6bd01099f0536a16d8590dcc82c2625ab" protocol=ttrpc version=3 Dec 12 18:39:57.793458 systemd[1]: Started cri-containerd-679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a.scope - libcontainer container 679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a. Dec 12 18:39:57.928917 containerd[1527]: time="2025-12-12T18:39:57.928768189Z" level=info msg="StartContainer for \"679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a\" returns successfully" Dec 12 18:39:58.554136 systemd[1]: cri-containerd-679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a.scope: Deactivated successfully. Dec 12 18:39:58.555774 systemd[1]: cri-containerd-679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a.scope: Consumed 624ms CPU time, 171.6M memory peak, 16.1M read from disk, 171.3M written to disk. 
Dec 12 18:39:58.557436 containerd[1527]: time="2025-12-12T18:39:58.557019816Z" level=info msg="received container exit event container_id:\"679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a\" id:\"679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a\" pid:3497 exited_at:{seconds:1765564798 nanos:555464384}" Dec 12 18:39:58.591302 kubelet[2702]: E1212 18:39:58.591077 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:39:58.612984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-679ffe71eba9a6c321d8db8d3f9024cd110749d3c7076fa2095497ddf8bbd13a-rootfs.mount: Deactivated successfully. Dec 12 18:39:58.616773 kubelet[2702]: I1212 18:39:58.616724 2702 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:39:58.682084 systemd[1]: Created slice kubepods-besteffort-poda92ba609_6740_41f9_b09b_6687ca1c0ede.slice - libcontainer container kubepods-besteffort-poda92ba609_6740_41f9_b09b_6687ca1c0ede.slice. Dec 12 18:39:58.693406 systemd[1]: Created slice kubepods-burstable-pod8c3a3740_82ee_4426_99af_2d870802441b.slice - libcontainer container kubepods-burstable-pod8c3a3740_82ee_4426_99af_2d870802441b.slice. Dec 12 18:39:58.712063 systemd[1]: Created slice kubepods-burstable-poddaebf512_6449_40e5_9d51_ce9fe62d676e.slice - libcontainer container kubepods-burstable-poddaebf512_6449_40e5_9d51_ce9fe62d676e.slice. Dec 12 18:39:58.727767 systemd[1]: Created slice kubepods-besteffort-pod4b3492e4_bb68_4901_abef_c77777942b00.slice - libcontainer container kubepods-besteffort-pod4b3492e4_bb68_4901_abef_c77777942b00.slice. 
Dec 12 18:39:58.737633 systemd[1]: Created slice kubepods-besteffort-podb0d854b1_0e79_4d9c_914e_0384874af8c1.slice - libcontainer container kubepods-besteffort-podb0d854b1_0e79_4d9c_914e_0384874af8c1.slice. Dec 12 18:39:58.755542 systemd[1]: Created slice kubepods-besteffort-podce317868_d60c_4160_8f9f_88083979fb20.slice - libcontainer container kubepods-besteffort-podce317868_d60c_4160_8f9f_88083979fb20.slice. Dec 12 18:39:58.767116 systemd[1]: Created slice kubepods-besteffort-pod779cb8c2_ad5e_4f5f_95e6_a4da96b7be10.slice - libcontainer container kubepods-besteffort-pod779cb8c2_ad5e_4f5f_95e6_a4da96b7be10.slice. Dec 12 18:39:58.781814 kubelet[2702]: I1212 18:39:58.781762 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pv5s\" (UniqueName: \"kubernetes.io/projected/4b3492e4-bb68-4901-abef-c77777942b00-kube-api-access-2pv5s\") pod \"calico-apiserver-7cb7c9749d-j4f49\" (UID: \"4b3492e4-bb68-4901-abef-c77777942b00\") " pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" Dec 12 18:39:58.782242 kubelet[2702]: I1212 18:39:58.782211 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0d854b1-0e79-4d9c-914e-0384874af8c1-goldmane-ca-bundle\") pod \"goldmane-666569f655-6s6gc\" (UID: \"b0d854b1-0e79-4d9c-914e-0384874af8c1\") " pod="calico-system/goldmane-666569f655-6s6gc" Dec 12 18:39:58.784694 kubelet[2702]: I1212 18:39:58.784643 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce317868-d60c-4160-8f9f-88083979fb20-whisker-backend-key-pair\") pod \"whisker-6f86c689f8-pcmpb\" (UID: \"ce317868-d60c-4160-8f9f-88083979fb20\") " pod="calico-system/whisker-6f86c689f8-pcmpb" Dec 12 18:39:58.784819 kubelet[2702]: I1212 18:39:58.784703 2702 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rm2c\" (UniqueName: \"kubernetes.io/projected/779cb8c2-ad5e-4f5f-95e6-a4da96b7be10-kube-api-access-2rm2c\") pod \"calico-kube-controllers-65df8ff88f-mf8qj\" (UID: \"779cb8c2-ad5e-4f5f-95e6-a4da96b7be10\") " pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" Dec 12 18:39:58.784819 kubelet[2702]: I1212 18:39:58.784731 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a92ba609-6740-41f9-b09b-6687ca1c0ede-calico-apiserver-certs\") pod \"calico-apiserver-7cb7c9749d-wbbnc\" (UID: \"a92ba609-6740-41f9-b09b-6687ca1c0ede\") " pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" Dec 12 18:39:58.784819 kubelet[2702]: I1212 18:39:58.784764 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b3492e4-bb68-4901-abef-c77777942b00-calico-apiserver-certs\") pod \"calico-apiserver-7cb7c9749d-j4f49\" (UID: \"4b3492e4-bb68-4901-abef-c77777942b00\") " pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" Dec 12 18:39:58.784819 kubelet[2702]: I1212 18:39:58.784791 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cwmq\" (UniqueName: \"kubernetes.io/projected/b0d854b1-0e79-4d9c-914e-0384874af8c1-kube-api-access-7cwmq\") pod \"goldmane-666569f655-6s6gc\" (UID: \"b0d854b1-0e79-4d9c-914e-0384874af8c1\") " pod="calico-system/goldmane-666569f655-6s6gc" Dec 12 18:39:58.784819 kubelet[2702]: I1212 18:39:58.784816 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3a3740-82ee-4426-99af-2d870802441b-config-volume\") pod \"coredns-668d6bf9bc-9kb82\" (UID: \"8c3a3740-82ee-4426-99af-2d870802441b\") " 
pod="kube-system/coredns-668d6bf9bc-9kb82" Dec 12 18:39:58.785107 kubelet[2702]: I1212 18:39:58.784845 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt4mc\" (UniqueName: \"kubernetes.io/projected/ce317868-d60c-4160-8f9f-88083979fb20-kube-api-access-lt4mc\") pod \"whisker-6f86c689f8-pcmpb\" (UID: \"ce317868-d60c-4160-8f9f-88083979fb20\") " pod="calico-system/whisker-6f86c689f8-pcmpb" Dec 12 18:39:58.785107 kubelet[2702]: I1212 18:39:58.784877 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daebf512-6449-40e5-9d51-ce9fe62d676e-config-volume\") pod \"coredns-668d6bf9bc-s9cv7\" (UID: \"daebf512-6449-40e5-9d51-ce9fe62d676e\") " pod="kube-system/coredns-668d6bf9bc-s9cv7" Dec 12 18:39:58.785107 kubelet[2702]: I1212 18:39:58.784906 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxtkz\" (UniqueName: \"kubernetes.io/projected/8c3a3740-82ee-4426-99af-2d870802441b-kube-api-access-hxtkz\") pod \"coredns-668d6bf9bc-9kb82\" (UID: \"8c3a3740-82ee-4426-99af-2d870802441b\") " pod="kube-system/coredns-668d6bf9bc-9kb82" Dec 12 18:39:58.785107 kubelet[2702]: I1212 18:39:58.784935 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgxx8\" (UniqueName: \"kubernetes.io/projected/daebf512-6449-40e5-9d51-ce9fe62d676e-kube-api-access-wgxx8\") pod \"coredns-668d6bf9bc-s9cv7\" (UID: \"daebf512-6449-40e5-9d51-ce9fe62d676e\") " pod="kube-system/coredns-668d6bf9bc-s9cv7" Dec 12 18:39:58.785107 kubelet[2702]: I1212 18:39:58.784973 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxqg6\" (UniqueName: \"kubernetes.io/projected/a92ba609-6740-41f9-b09b-6687ca1c0ede-kube-api-access-pxqg6\") pod 
\"calico-apiserver-7cb7c9749d-wbbnc\" (UID: \"a92ba609-6740-41f9-b09b-6687ca1c0ede\") " pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" Dec 12 18:39:58.785574 kubelet[2702]: I1212 18:39:58.785003 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d854b1-0e79-4d9c-914e-0384874af8c1-config\") pod \"goldmane-666569f655-6s6gc\" (UID: \"b0d854b1-0e79-4d9c-914e-0384874af8c1\") " pod="calico-system/goldmane-666569f655-6s6gc" Dec 12 18:39:58.787050 kubelet[2702]: I1212 18:39:58.786138 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b0d854b1-0e79-4d9c-914e-0384874af8c1-goldmane-key-pair\") pod \"goldmane-666569f655-6s6gc\" (UID: \"b0d854b1-0e79-4d9c-914e-0384874af8c1\") " pod="calico-system/goldmane-666569f655-6s6gc" Dec 12 18:39:58.787050 kubelet[2702]: I1212 18:39:58.786326 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce317868-d60c-4160-8f9f-88083979fb20-whisker-ca-bundle\") pod \"whisker-6f86c689f8-pcmpb\" (UID: \"ce317868-d60c-4160-8f9f-88083979fb20\") " pod="calico-system/whisker-6f86c689f8-pcmpb" Dec 12 18:39:58.787050 kubelet[2702]: I1212 18:39:58.786620 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/779cb8c2-ad5e-4f5f-95e6-a4da96b7be10-tigera-ca-bundle\") pod \"calico-kube-controllers-65df8ff88f-mf8qj\" (UID: \"779cb8c2-ad5e-4f5f-95e6-a4da96b7be10\") " pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" Dec 12 18:39:58.797468 kubelet[2702]: E1212 18:39:58.797217 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:58.799528 containerd[1527]: time="2025-12-12T18:39:58.799410315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:39:59.006148 kubelet[2702]: E1212 18:39:59.005354 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:59.009155 containerd[1527]: time="2025-12-12T18:39:59.008366944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9kb82,Uid:8c3a3740-82ee-4426-99af-2d870802441b,Namespace:kube-system,Attempt:0,}" Dec 12 18:39:59.022437 kubelet[2702]: E1212 18:39:59.021587 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:59.022613 containerd[1527]: time="2025-12-12T18:39:59.022506627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s9cv7,Uid:daebf512-6449-40e5-9d51-ce9fe62d676e,Namespace:kube-system,Attempt:0,}" Dec 12 18:39:59.043980 containerd[1527]: time="2025-12-12T18:39:59.043933938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7c9749d-j4f49,Uid:4b3492e4-bb68-4901-abef-c77777942b00,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:39:59.050054 containerd[1527]: time="2025-12-12T18:39:59.048567155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6s6gc,Uid:b0d854b1-0e79-4d9c-914e-0384874af8c1,Namespace:calico-system,Attempt:0,}" Dec 12 18:39:59.066054 containerd[1527]: time="2025-12-12T18:39:59.065128595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f86c689f8-pcmpb,Uid:ce317868-d60c-4160-8f9f-88083979fb20,Namespace:calico-system,Attempt:0,}" Dec 12 18:39:59.104297 containerd[1527]: time="2025-12-12T18:39:59.103270994Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65df8ff88f-mf8qj,Uid:779cb8c2-ad5e-4f5f-95e6-a4da96b7be10,Namespace:calico-system,Attempt:0,}" Dec 12 18:39:59.305609 containerd[1527]: time="2025-12-12T18:39:59.305560457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7c9749d-wbbnc,Uid:a92ba609-6740-41f9-b09b-6687ca1c0ede,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:39:59.390635 containerd[1527]: time="2025-12-12T18:39:59.390508005Z" level=error msg="Failed to destroy network for sandbox \"484ef1b39a18c4847101b4b2e43cdb67129cd14d93acad84761ff8bb1bbf0a0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.394774 containerd[1527]: time="2025-12-12T18:39:59.394504151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7c9749d-j4f49,Uid:4b3492e4-bb68-4901-abef-c77777942b00,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"484ef1b39a18c4847101b4b2e43cdb67129cd14d93acad84761ff8bb1bbf0a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.399993 kubelet[2702]: E1212 18:39:59.399649 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"484ef1b39a18c4847101b4b2e43cdb67129cd14d93acad84761ff8bb1bbf0a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.399993 kubelet[2702]: E1212 18:39:59.399740 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"484ef1b39a18c4847101b4b2e43cdb67129cd14d93acad84761ff8bb1bbf0a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" Dec 12 18:39:59.399993 kubelet[2702]: E1212 18:39:59.399766 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"484ef1b39a18c4847101b4b2e43cdb67129cd14d93acad84761ff8bb1bbf0a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" Dec 12 18:39:59.400235 kubelet[2702]: E1212 18:39:59.399816 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cb7c9749d-j4f49_calico-apiserver(4b3492e4-bb68-4901-abef-c77777942b00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cb7c9749d-j4f49_calico-apiserver(4b3492e4-bb68-4901-abef-c77777942b00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"484ef1b39a18c4847101b4b2e43cdb67129cd14d93acad84761ff8bb1bbf0a0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" podUID="4b3492e4-bb68-4901-abef-c77777942b00" Dec 12 18:39:59.418482 containerd[1527]: time="2025-12-12T18:39:59.418380771Z" level=error msg="Failed to destroy network for sandbox \"79d9824e383f831d044783ed017b784eab95f19231b978d049bfc244ed24bd37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.427212 containerd[1527]: time="2025-12-12T18:39:59.426200397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f86c689f8-pcmpb,Uid:ce317868-d60c-4160-8f9f-88083979fb20,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d9824e383f831d044783ed017b784eab95f19231b978d049bfc244ed24bd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.428802 kubelet[2702]: E1212 18:39:59.427594 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d9824e383f831d044783ed017b784eab95f19231b978d049bfc244ed24bd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.428802 kubelet[2702]: E1212 18:39:59.427678 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d9824e383f831d044783ed017b784eab95f19231b978d049bfc244ed24bd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f86c689f8-pcmpb" Dec 12 18:39:59.428802 kubelet[2702]: E1212 18:39:59.427705 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d9824e383f831d044783ed017b784eab95f19231b978d049bfc244ed24bd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f86c689f8-pcmpb" Dec 12 18:39:59.429070 kubelet[2702]: E1212 18:39:59.427750 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f86c689f8-pcmpb_calico-system(ce317868-d60c-4160-8f9f-88083979fb20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f86c689f8-pcmpb_calico-system(ce317868-d60c-4160-8f9f-88083979fb20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79d9824e383f831d044783ed017b784eab95f19231b978d049bfc244ed24bd37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f86c689f8-pcmpb" podUID="ce317868-d60c-4160-8f9f-88083979fb20" Dec 12 18:39:59.465713 containerd[1527]: time="2025-12-12T18:39:59.465652519Z" level=error msg="Failed to destroy network for sandbox \"a03542d62887c7c365c82488b33adbefecc3adad34a9095914f669e88df03923\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.470806 containerd[1527]: time="2025-12-12T18:39:59.470735245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s9cv7,Uid:daebf512-6449-40e5-9d51-ce9fe62d676e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a03542d62887c7c365c82488b33adbefecc3adad34a9095914f669e88df03923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.471369 kubelet[2702]: E1212 18:39:59.471020 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"a03542d62887c7c365c82488b33adbefecc3adad34a9095914f669e88df03923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.471369 kubelet[2702]: E1212 18:39:59.471328 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a03542d62887c7c365c82488b33adbefecc3adad34a9095914f669e88df03923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s9cv7" Dec 12 18:39:59.471702 kubelet[2702]: E1212 18:39:59.471357 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a03542d62887c7c365c82488b33adbefecc3adad34a9095914f669e88df03923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s9cv7" Dec 12 18:39:59.471702 kubelet[2702]: E1212 18:39:59.471619 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s9cv7_kube-system(daebf512-6449-40e5-9d51-ce9fe62d676e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s9cv7_kube-system(daebf512-6449-40e5-9d51-ce9fe62d676e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a03542d62887c7c365c82488b33adbefecc3adad34a9095914f669e88df03923\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s9cv7" podUID="daebf512-6449-40e5-9d51-ce9fe62d676e" Dec 12 18:39:59.475555 containerd[1527]: time="2025-12-12T18:39:59.475405880Z" level=error msg="Failed to destroy network for sandbox \"d16eff87574d468ddf6fe5663ef6a03a7520a9f9ebc78f3ae40ef9aa6139203f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.478223 containerd[1527]: time="2025-12-12T18:39:59.478111177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6s6gc,Uid:b0d854b1-0e79-4d9c-914e-0384874af8c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16eff87574d468ddf6fe5663ef6a03a7520a9f9ebc78f3ae40ef9aa6139203f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.479268 kubelet[2702]: E1212 18:39:59.479193 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16eff87574d468ddf6fe5663ef6a03a7520a9f9ebc78f3ae40ef9aa6139203f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.479425 kubelet[2702]: E1212 18:39:59.479280 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16eff87574d468ddf6fe5663ef6a03a7520a9f9ebc78f3ae40ef9aa6139203f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-6s6gc" Dec 12 18:39:59.479425 kubelet[2702]: E1212 18:39:59.479325 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16eff87574d468ddf6fe5663ef6a03a7520a9f9ebc78f3ae40ef9aa6139203f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-6s6gc" Dec 12 18:39:59.479876 kubelet[2702]: E1212 18:39:59.479411 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-6s6gc_calico-system(b0d854b1-0e79-4d9c-914e-0384874af8c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-6s6gc_calico-system(b0d854b1-0e79-4d9c-914e-0384874af8c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d16eff87574d468ddf6fe5663ef6a03a7520a9f9ebc78f3ae40ef9aa6139203f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-6s6gc" podUID="b0d854b1-0e79-4d9c-914e-0384874af8c1" Dec 12 18:39:59.487076 containerd[1527]: time="2025-12-12T18:39:59.486999348Z" level=error msg="Failed to destroy network for sandbox \"4933eee8e9aa6ef953a00d9bda5623b9b2d03902fd352c7b4a478a4b48dcf2cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.488808 containerd[1527]: time="2025-12-12T18:39:59.488565559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9kb82,Uid:8c3a3740-82ee-4426-99af-2d870802441b,Namespace:kube-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4933eee8e9aa6ef953a00d9bda5623b9b2d03902fd352c7b4a478a4b48dcf2cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.492609 kubelet[2702]: E1212 18:39:59.491392 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4933eee8e9aa6ef953a00d9bda5623b9b2d03902fd352c7b4a478a4b48dcf2cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.492609 kubelet[2702]: E1212 18:39:59.491469 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4933eee8e9aa6ef953a00d9bda5623b9b2d03902fd352c7b4a478a4b48dcf2cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9kb82" Dec 12 18:39:59.492609 kubelet[2702]: E1212 18:39:59.491497 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4933eee8e9aa6ef953a00d9bda5623b9b2d03902fd352c7b4a478a4b48dcf2cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9kb82" Dec 12 18:39:59.492861 kubelet[2702]: E1212 18:39:59.491607 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9kb82_kube-system(8c3a3740-82ee-4426-99af-2d870802441b)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9kb82_kube-system(8c3a3740-82ee-4426-99af-2d870802441b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4933eee8e9aa6ef953a00d9bda5623b9b2d03902fd352c7b4a478a4b48dcf2cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9kb82" podUID="8c3a3740-82ee-4426-99af-2d870802441b" Dec 12 18:39:59.494520 containerd[1527]: time="2025-12-12T18:39:59.494367930Z" level=error msg="Failed to destroy network for sandbox \"dbecad1be2ab3e3563be915cea75985f07564457c825060b3c1942e541fdaaa8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.495601 containerd[1527]: time="2025-12-12T18:39:59.495489354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65df8ff88f-mf8qj,Uid:779cb8c2-ad5e-4f5f-95e6-a4da96b7be10,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbecad1be2ab3e3563be915cea75985f07564457c825060b3c1942e541fdaaa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.496174 kubelet[2702]: E1212 18:39:59.496131 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbecad1be2ab3e3563be915cea75985f07564457c825060b3c1942e541fdaaa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 
18:39:59.496314 kubelet[2702]: E1212 18:39:59.496294 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbecad1be2ab3e3563be915cea75985f07564457c825060b3c1942e541fdaaa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" Dec 12 18:39:59.496438 kubelet[2702]: E1212 18:39:59.496321 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbecad1be2ab3e3563be915cea75985f07564457c825060b3c1942e541fdaaa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" Dec 12 18:39:59.496601 kubelet[2702]: E1212 18:39:59.496476 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65df8ff88f-mf8qj_calico-system(779cb8c2-ad5e-4f5f-95e6-a4da96b7be10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65df8ff88f-mf8qj_calico-system(779cb8c2-ad5e-4f5f-95e6-a4da96b7be10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbecad1be2ab3e3563be915cea75985f07564457c825060b3c1942e541fdaaa8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" podUID="779cb8c2-ad5e-4f5f-95e6-a4da96b7be10" Dec 12 18:39:59.520388 containerd[1527]: time="2025-12-12T18:39:59.520238548Z" level=error msg="Failed to destroy network for sandbox 
\"ef699704a4a6880a180f4f639acfb2a39804d3da0513e56cd14789a2cd88d45a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.521640 containerd[1527]: time="2025-12-12T18:39:59.521584164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7c9749d-wbbnc,Uid:a92ba609-6740-41f9-b09b-6687ca1c0ede,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef699704a4a6880a180f4f639acfb2a39804d3da0513e56cd14789a2cd88d45a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.522136 kubelet[2702]: E1212 18:39:59.522095 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef699704a4a6880a180f4f639acfb2a39804d3da0513e56cd14789a2cd88d45a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:59.522233 kubelet[2702]: E1212 18:39:59.522171 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef699704a4a6880a180f4f639acfb2a39804d3da0513e56cd14789a2cd88d45a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" Dec 12 18:39:59.522233 kubelet[2702]: E1212 18:39:59.522197 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ef699704a4a6880a180f4f639acfb2a39804d3da0513e56cd14789a2cd88d45a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" Dec 12 18:39:59.522600 kubelet[2702]: E1212 18:39:59.522282 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cb7c9749d-wbbnc_calico-apiserver(a92ba609-6740-41f9-b09b-6687ca1c0ede)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cb7c9749d-wbbnc_calico-apiserver(a92ba609-6740-41f9-b09b-6687ca1c0ede)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef699704a4a6880a180f4f639acfb2a39804d3da0513e56cd14789a2cd88d45a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" podUID="a92ba609-6740-41f9-b09b-6687ca1c0ede" Dec 12 18:40:00.610177 systemd[1]: Created slice kubepods-besteffort-podefd42e2e_3a4f_425a_9c07_184c94bcbd7e.slice - libcontainer container kubepods-besteffort-podefd42e2e_3a4f_425a_9c07_184c94bcbd7e.slice. 
Dec 12 18:40:00.620081 containerd[1527]: time="2025-12-12T18:40:00.619773732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2flfr,Uid:efd42e2e-3a4f-425a-9c07-184c94bcbd7e,Namespace:calico-system,Attempt:0,}" Dec 12 18:40:00.757702 containerd[1527]: time="2025-12-12T18:40:00.757650834Z" level=error msg="Failed to destroy network for sandbox \"1c29ae3d0e019c844053390931e5550ab240a62a7ebf3ed95812d80c49747515\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:40:00.760058 containerd[1527]: time="2025-12-12T18:40:00.759349088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2flfr,Uid:efd42e2e-3a4f-425a-9c07-184c94bcbd7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c29ae3d0e019c844053390931e5550ab240a62a7ebf3ed95812d80c49747515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:40:00.761879 systemd[1]: run-netns-cni\x2d18339c75\x2da1e9\x2d5499\x2d1a0f\x2d9b87c6707418.mount: Deactivated successfully. 
Dec 12 18:40:00.763314 kubelet[2702]: E1212 18:40:00.762516 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c29ae3d0e019c844053390931e5550ab240a62a7ebf3ed95812d80c49747515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:40:00.763314 kubelet[2702]: E1212 18:40:00.762608 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c29ae3d0e019c844053390931e5550ab240a62a7ebf3ed95812d80c49747515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2flfr" Dec 12 18:40:00.763314 kubelet[2702]: E1212 18:40:00.762638 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c29ae3d0e019c844053390931e5550ab240a62a7ebf3ed95812d80c49747515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2flfr" Dec 12 18:40:00.765000 kubelet[2702]: E1212 18:40:00.763627 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2flfr_calico-system(efd42e2e-3a4f-425a-9c07-184c94bcbd7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2flfr_calico-system(efd42e2e-3a4f-425a-9c07-184c94bcbd7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c29ae3d0e019c844053390931e5550ab240a62a7ebf3ed95812d80c49747515\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:40:06.065980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313533027.mount: Deactivated successfully. Dec 12 18:40:06.112196 containerd[1527]: time="2025-12-12T18:40:06.112124204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:40:06.118061 containerd[1527]: time="2025-12-12T18:40:06.116305392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:40:06.118061 containerd[1527]: time="2025-12-12T18:40:06.116482124Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:40:06.119753 containerd[1527]: time="2025-12-12T18:40:06.119703306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:40:06.120794 containerd[1527]: time="2025-12-12T18:40:06.120743657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.321279085s" Dec 12 18:40:06.121000 containerd[1527]: time="2025-12-12T18:40:06.120972378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 
18:40:06.149104 containerd[1527]: time="2025-12-12T18:40:06.148244025Z" level=info msg="CreateContainer within sandbox \"b18e3921a8a1a88a09d84a392d90c662d21accd20ab5a736fd4e1bcd28d04f4a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:40:06.165833 containerd[1527]: time="2025-12-12T18:40:06.165778620Z" level=info msg="Container 55ec07ccdd510df862f271ee1011b527366acbf966576381a776985ea490dad9: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:40:06.177167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1940831454.mount: Deactivated successfully. Dec 12 18:40:06.190515 containerd[1527]: time="2025-12-12T18:40:06.190417621Z" level=info msg="CreateContainer within sandbox \"b18e3921a8a1a88a09d84a392d90c662d21accd20ab5a736fd4e1bcd28d04f4a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"55ec07ccdd510df862f271ee1011b527366acbf966576381a776985ea490dad9\"" Dec 12 18:40:06.191363 containerd[1527]: time="2025-12-12T18:40:06.191308734Z" level=info msg="StartContainer for \"55ec07ccdd510df862f271ee1011b527366acbf966576381a776985ea490dad9\"" Dec 12 18:40:06.201959 containerd[1527]: time="2025-12-12T18:40:06.201850988Z" level=info msg="connecting to shim 55ec07ccdd510df862f271ee1011b527366acbf966576381a776985ea490dad9" address="unix:///run/containerd/s/640495d58fce1ed7f13a6110be4f9cf6bd01099f0536a16d8590dcc82c2625ab" protocol=ttrpc version=3 Dec 12 18:40:06.429392 systemd[1]: Started cri-containerd-55ec07ccdd510df862f271ee1011b527366acbf966576381a776985ea490dad9.scope - libcontainer container 55ec07ccdd510df862f271ee1011b527366acbf966576381a776985ea490dad9. Dec 12 18:40:06.575911 containerd[1527]: time="2025-12-12T18:40:06.575850270Z" level=info msg="StartContainer for \"55ec07ccdd510df862f271ee1011b527366acbf966576381a776985ea490dad9\" returns successfully" Dec 12 18:40:06.714092 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Dec 12 18:40:06.714645 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Dec 12 18:40:06.855636 kubelet[2702]: E1212 18:40:06.855561 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:06.908397 kubelet[2702]: I1212 18:40:06.907914 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gghq2" podStartSLOduration=3.722225862 podStartE2EDuration="20.907887128s" podCreationTimestamp="2025-12-12 18:39:46 +0000 UTC" firstStartedPulling="2025-12-12 18:39:48.936542786 +0000 UTC m=+27.526205956" lastFinishedPulling="2025-12-12 18:40:06.122204058 +0000 UTC m=+44.711867222" observedRunningTime="2025-12-12 18:40:06.903816362 +0000 UTC m=+45.493479546" watchObservedRunningTime="2025-12-12 18:40:06.907887128 +0000 UTC m=+45.497550344" Dec 12 18:40:07.128225 kubelet[2702]: I1212 18:40:07.127971 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:40:07.130573 kubelet[2702]: E1212 18:40:07.129900 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:07.181116 kubelet[2702]: I1212 18:40:07.180446 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt4mc\" (UniqueName: \"kubernetes.io/projected/ce317868-d60c-4160-8f9f-88083979fb20-kube-api-access-lt4mc\") pod \"ce317868-d60c-4160-8f9f-88083979fb20\" (UID: \"ce317868-d60c-4160-8f9f-88083979fb20\") " Dec 12 18:40:07.181116 kubelet[2702]: I1212 18:40:07.180516 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce317868-d60c-4160-8f9f-88083979fb20-whisker-ca-bundle\") pod 
\"ce317868-d60c-4160-8f9f-88083979fb20\" (UID: \"ce317868-d60c-4160-8f9f-88083979fb20\") " Dec 12 18:40:07.181116 kubelet[2702]: I1212 18:40:07.180556 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce317868-d60c-4160-8f9f-88083979fb20-whisker-backend-key-pair\") pod \"ce317868-d60c-4160-8f9f-88083979fb20\" (UID: \"ce317868-d60c-4160-8f9f-88083979fb20\") " Dec 12 18:40:07.192527 kubelet[2702]: I1212 18:40:07.187738 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce317868-d60c-4160-8f9f-88083979fb20-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ce317868-d60c-4160-8f9f-88083979fb20" (UID: "ce317868-d60c-4160-8f9f-88083979fb20"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:40:07.192245 systemd[1]: var-lib-kubelet-pods-ce317868\x2dd60c\x2d4160\x2d8f9f\x2d88083979fb20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlt4mc.mount: Deactivated successfully. Dec 12 18:40:07.210134 systemd[1]: var-lib-kubelet-pods-ce317868\x2dd60c\x2d4160\x2d8f9f\x2d88083979fb20-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:40:07.232246 kubelet[2702]: I1212 18:40:07.201187 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce317868-d60c-4160-8f9f-88083979fb20-kube-api-access-lt4mc" (OuterVolumeSpecName: "kube-api-access-lt4mc") pod "ce317868-d60c-4160-8f9f-88083979fb20" (UID: "ce317868-d60c-4160-8f9f-88083979fb20"). InnerVolumeSpecName "kube-api-access-lt4mc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:40:07.232246 kubelet[2702]: I1212 18:40:07.216615 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce317868-d60c-4160-8f9f-88083979fb20-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ce317868-d60c-4160-8f9f-88083979fb20" (UID: "ce317868-d60c-4160-8f9f-88083979fb20"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:40:07.281727 kubelet[2702]: I1212 18:40:07.281556 2702 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lt4mc\" (UniqueName: \"kubernetes.io/projected/ce317868-d60c-4160-8f9f-88083979fb20-kube-api-access-lt4mc\") on node \"ci-4459.2.2-f-e155308a0b\" DevicePath \"\"" Dec 12 18:40:07.281727 kubelet[2702]: I1212 18:40:07.281644 2702 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce317868-d60c-4160-8f9f-88083979fb20-whisker-ca-bundle\") on node \"ci-4459.2.2-f-e155308a0b\" DevicePath \"\"" Dec 12 18:40:07.281727 kubelet[2702]: I1212 18:40:07.281666 2702 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce317868-d60c-4160-8f9f-88083979fb20-whisker-backend-key-pair\") on node \"ci-4459.2.2-f-e155308a0b\" DevicePath \"\"" Dec 12 18:40:07.603344 systemd[1]: Removed slice kubepods-besteffort-podce317868_d60c_4160_8f9f_88083979fb20.slice - libcontainer container kubepods-besteffort-podce317868_d60c_4160_8f9f_88083979fb20.slice. 
Dec 12 18:40:07.858555 kubelet[2702]: E1212 18:40:07.858389 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:07.860673 kubelet[2702]: E1212 18:40:07.860544 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:07.958984 systemd[1]: Created slice kubepods-besteffort-podb68d5b78_37a9_4c51_a50e_f394db8ab487.slice - libcontainer container kubepods-besteffort-podb68d5b78_37a9_4c51_a50e_f394db8ab487.slice. Dec 12 18:40:08.090088 kubelet[2702]: I1212 18:40:08.089990 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b68d5b78-37a9-4c51-a50e-f394db8ab487-whisker-backend-key-pair\") pod \"whisker-686df6b99b-m5p6t\" (UID: \"b68d5b78-37a9-4c51-a50e-f394db8ab487\") " pod="calico-system/whisker-686df6b99b-m5p6t" Dec 12 18:40:08.090408 kubelet[2702]: I1212 18:40:08.090110 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68d5b78-37a9-4c51-a50e-f394db8ab487-whisker-ca-bundle\") pod \"whisker-686df6b99b-m5p6t\" (UID: \"b68d5b78-37a9-4c51-a50e-f394db8ab487\") " pod="calico-system/whisker-686df6b99b-m5p6t" Dec 12 18:40:08.090408 kubelet[2702]: I1212 18:40:08.090185 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jw2x\" (UniqueName: \"kubernetes.io/projected/b68d5b78-37a9-4c51-a50e-f394db8ab487-kube-api-access-8jw2x\") pod \"whisker-686df6b99b-m5p6t\" (UID: \"b68d5b78-37a9-4c51-a50e-f394db8ab487\") " pod="calico-system/whisker-686df6b99b-m5p6t" Dec 12 18:40:08.267889 containerd[1527]: 
time="2025-12-12T18:40:08.267698578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-686df6b99b-m5p6t,Uid:b68d5b78-37a9-4c51-a50e-f394db8ab487,Namespace:calico-system,Attempt:0,}" Dec 12 18:40:08.769508 systemd-networkd[1437]: cali21c27bf7324: Link UP Dec 12 18:40:08.773425 systemd-networkd[1437]: cali21c27bf7324: Gained carrier Dec 12 18:40:08.826180 containerd[1527]: 2025-12-12 18:40:08.373 [INFO][3879] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:40:08.826180 containerd[1527]: 2025-12-12 18:40:08.412 [INFO][3879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0 whisker-686df6b99b- calico-system b68d5b78-37a9-4c51-a50e-f394db8ab487 970 0 2025-12-12 18:40:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:686df6b99b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-f-e155308a0b whisker-686df6b99b-m5p6t eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali21c27bf7324 [] [] }} ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" Namespace="calico-system" Pod="whisker-686df6b99b-m5p6t" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-" Dec 12 18:40:08.826180 containerd[1527]: 2025-12-12 18:40:08.412 [INFO][3879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" Namespace="calico-system" Pod="whisker-686df6b99b-m5p6t" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0" Dec 12 18:40:08.826180 containerd[1527]: 2025-12-12 18:40:08.612 [INFO][3892] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" HandleID="k8s-pod-network.31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" Workload="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0" Dec 12 18:40:08.826654 containerd[1527]: 2025-12-12 18:40:08.615 [INFO][3892] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" HandleID="k8s-pod-network.31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" Workload="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000606260), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-f-e155308a0b", "pod":"whisker-686df6b99b-m5p6t", "timestamp":"2025-12-12 18:40:08.612972615 +0000 UTC"}, Hostname:"ci-4459.2.2-f-e155308a0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:40:08.826654 containerd[1527]: 2025-12-12 18:40:08.615 [INFO][3892] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:40:08.826654 containerd[1527]: 2025-12-12 18:40:08.616 [INFO][3892] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:40:08.826654 containerd[1527]: 2025-12-12 18:40:08.617 [INFO][3892] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-f-e155308a0b' Dec 12 18:40:08.826654 containerd[1527]: 2025-12-12 18:40:08.663 [INFO][3892] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:08.826654 containerd[1527]: 2025-12-12 18:40:08.702 [INFO][3892] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:08.826654 containerd[1527]: 2025-12-12 18:40:08.714 [INFO][3892] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:08.826654 containerd[1527]: 2025-12-12 18:40:08.718 [INFO][3892] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:08.826654 containerd[1527]: 2025-12-12 18:40:08.724 [INFO][3892] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:08.826902 containerd[1527]: 2025-12-12 18:40:08.724 [INFO][3892] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:08.826902 containerd[1527]: 2025-12-12 18:40:08.727 [INFO][3892] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7 Dec 12 18:40:08.826902 containerd[1527]: 2025-12-12 18:40:08.735 [INFO][3892] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:08.826902 containerd[1527]: 2025-12-12 18:40:08.744 [INFO][3892] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.3.193/26] block=192.168.3.192/26 handle="k8s-pod-network.31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:08.826902 containerd[1527]: 2025-12-12 18:40:08.744 [INFO][3892] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.193/26] handle="k8s-pod-network.31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:08.826902 containerd[1527]: 2025-12-12 18:40:08.744 [INFO][3892] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:40:08.826902 containerd[1527]: 2025-12-12 18:40:08.744 [INFO][3892] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.193/26] IPv6=[] ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" HandleID="k8s-pod-network.31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" Workload="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0" Dec 12 18:40:08.827769 containerd[1527]: 2025-12-12 18:40:08.749 [INFO][3879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" Namespace="calico-system" Pod="whisker-686df6b99b-m5p6t" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0", GenerateName:"whisker-686df6b99b-", Namespace:"calico-system", SelfLink:"", UID:"b68d5b78-37a9-4c51-a50e-f394db8ab487", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 40, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"686df6b99b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"", Pod:"whisker-686df6b99b-m5p6t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali21c27bf7324", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:08.827769 containerd[1527]: 2025-12-12 18:40:08.750 [INFO][3879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.193/32] ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" Namespace="calico-system" Pod="whisker-686df6b99b-m5p6t" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0" Dec 12 18:40:08.830347 containerd[1527]: 2025-12-12 18:40:08.750 [INFO][3879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21c27bf7324 ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" Namespace="calico-system" Pod="whisker-686df6b99b-m5p6t" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0" Dec 12 18:40:08.830347 containerd[1527]: 2025-12-12 18:40:08.774 [INFO][3879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" Namespace="calico-system" Pod="whisker-686df6b99b-m5p6t" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0" Dec 12 18:40:08.830454 containerd[1527]: 2025-12-12 18:40:08.777 [INFO][3879] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" Namespace="calico-system" Pod="whisker-686df6b99b-m5p6t" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0", GenerateName:"whisker-686df6b99b-", Namespace:"calico-system", SelfLink:"", UID:"b68d5b78-37a9-4c51-a50e-f394db8ab487", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 40, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"686df6b99b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7", Pod:"whisker-686df6b99b-m5p6t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali21c27bf7324", MAC:"c6:60:2c:82:39:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:08.830602 containerd[1527]: 2025-12-12 18:40:08.817 [INFO][3879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" 
Namespace="calico-system" Pod="whisker-686df6b99b-m5p6t" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-whisker--686df6b99b--m5p6t-eth0" Dec 12 18:40:08.941735 containerd[1527]: time="2025-12-12T18:40:08.941590804Z" level=info msg="connecting to shim 31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7" address="unix:///run/containerd/s/2a78a7f7acb1c60f1c61833bd7add372b04c9309a0bf43bde181e333d0864ec4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:40:09.031377 systemd[1]: Started cri-containerd-31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7.scope - libcontainer container 31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7. Dec 12 18:40:09.146926 containerd[1527]: time="2025-12-12T18:40:09.146868795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-686df6b99b-m5p6t,Uid:b68d5b78-37a9-4c51-a50e-f394db8ab487,Namespace:calico-system,Attempt:0,} returns sandbox id \"31f7934e97be99abbc818a1a1b0cb64d75c8467ac393fc7bf300658fd3a788b7\"" Dec 12 18:40:09.153053 containerd[1527]: time="2025-12-12T18:40:09.152318279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:40:09.495256 containerd[1527]: time="2025-12-12T18:40:09.495089676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:09.498063 containerd[1527]: time="2025-12-12T18:40:09.497751304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:40:09.498063 containerd[1527]: time="2025-12-12T18:40:09.497791363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:40:09.498969 kubelet[2702]: E1212 18:40:09.498634 2702 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:40:09.498969 kubelet[2702]: E1212 18:40:09.498737 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:40:09.507401 kubelet[2702]: E1212 18:40:09.507344 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b5abad96921f40f3a1fb0a13899f51dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8jw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&S
eccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-686df6b99b-m5p6t_calico-system(b68d5b78-37a9-4c51-a50e-f394db8ab487): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:09.510156 containerd[1527]: time="2025-12-12T18:40:09.510114855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:40:09.594350 containerd[1527]: time="2025-12-12T18:40:09.593102921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7c9749d-j4f49,Uid:4b3492e4-bb68-4901-abef-c77777942b00,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:40:09.603055 kubelet[2702]: I1212 18:40:09.600779 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce317868-d60c-4160-8f9f-88083979fb20" path="/var/lib/kubelet/pods/ce317868-d60c-4160-8f9f-88083979fb20/volumes" Dec 12 18:40:09.802730 systemd-networkd[1437]: calida8e236c596: Link UP Dec 12 18:40:09.804340 systemd-networkd[1437]: calida8e236c596: Gained carrier Dec 12 18:40:09.840829 containerd[1527]: 2025-12-12 18:40:09.659 [INFO][4076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0 calico-apiserver-7cb7c9749d- calico-apiserver 4b3492e4-bb68-4901-abef-c77777942b00 891 0 2025-12-12 18:39:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cb7c9749d projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-f-e155308a0b calico-apiserver-7cb7c9749d-j4f49 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calida8e236c596 [] [] }} ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-j4f49" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-" Dec 12 18:40:09.840829 containerd[1527]: 2025-12-12 18:40:09.660 [INFO][4076] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-j4f49" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0" Dec 12 18:40:09.840829 containerd[1527]: 2025-12-12 18:40:09.718 [INFO][4088] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" HandleID="k8s-pod-network.9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Workload="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0" Dec 12 18:40:09.841561 containerd[1527]: 2025-12-12 18:40:09.718 [INFO][4088] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" HandleID="k8s-pod-network.9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Workload="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-f-e155308a0b", "pod":"calico-apiserver-7cb7c9749d-j4f49", "timestamp":"2025-12-12 18:40:09.718377056 +0000 UTC"}, Hostname:"ci-4459.2.2-f-e155308a0b", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:40:09.841561 containerd[1527]: 2025-12-12 18:40:09.718 [INFO][4088] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:40:09.841561 containerd[1527]: 2025-12-12 18:40:09.718 [INFO][4088] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:40:09.841561 containerd[1527]: 2025-12-12 18:40:09.718 [INFO][4088] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-f-e155308a0b' Dec 12 18:40:09.841561 containerd[1527]: 2025-12-12 18:40:09.727 [INFO][4088] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:09.841561 containerd[1527]: 2025-12-12 18:40:09.735 [INFO][4088] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:09.841561 containerd[1527]: 2025-12-12 18:40:09.743 [INFO][4088] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:09.841561 containerd[1527]: 2025-12-12 18:40:09.749 [INFO][4088] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:09.841561 containerd[1527]: 2025-12-12 18:40:09.754 [INFO][4088] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:09.842348 containerd[1527]: 2025-12-12 18:40:09.754 [INFO][4088] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:09.842348 containerd[1527]: 2025-12-12 18:40:09.759 [INFO][4088] ipam/ipam.go 1780: Creating new 
handle: k8s-pod-network.9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137 Dec 12 18:40:09.842348 containerd[1527]: 2025-12-12 18:40:09.772 [INFO][4088] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:09.842348 containerd[1527]: 2025-12-12 18:40:09.788 [INFO][4088] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.194/26] block=192.168.3.192/26 handle="k8s-pod-network.9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:09.842348 containerd[1527]: 2025-12-12 18:40:09.788 [INFO][4088] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.194/26] handle="k8s-pod-network.9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:09.842348 containerd[1527]: 2025-12-12 18:40:09.788 [INFO][4088] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:40:09.842348 containerd[1527]: 2025-12-12 18:40:09.788 [INFO][4088] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.194/26] IPv6=[] ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" HandleID="k8s-pod-network.9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Workload="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0" Dec 12 18:40:09.842878 containerd[1527]: 2025-12-12 18:40:09.797 [INFO][4076] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-j4f49" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0", GenerateName:"calico-apiserver-7cb7c9749d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b3492e4-bb68-4901-abef-c77777942b00", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7c9749d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"", Pod:"calico-apiserver-7cb7c9749d-j4f49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida8e236c596", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:09.842972 containerd[1527]: 2025-12-12 18:40:09.798 [INFO][4076] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.194/32] ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-j4f49" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0" Dec 12 18:40:09.842972 containerd[1527]: 2025-12-12 18:40:09.798 [INFO][4076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida8e236c596 ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-j4f49" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0" Dec 12 18:40:09.842972 containerd[1527]: 2025-12-12 18:40:09.805 [INFO][4076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-j4f49" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0" Dec 12 18:40:09.843496 containerd[1527]: 2025-12-12 18:40:09.807 [INFO][4076] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-j4f49" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0", GenerateName:"calico-apiserver-7cb7c9749d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b3492e4-bb68-4901-abef-c77777942b00", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7c9749d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137", Pod:"calico-apiserver-7cb7c9749d-j4f49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida8e236c596", MAC:"22:b4:5c:b0:d5:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:09.843645 containerd[1527]: 2025-12-12 18:40:09.834 [INFO][4076] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-j4f49" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--j4f49-eth0" Dec 12 18:40:09.875396 containerd[1527]: time="2025-12-12T18:40:09.875100487Z" level=info 
msg="connecting to shim 9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137" address="unix:///run/containerd/s/c10c1fa6e108d46e9b6206ece5fcaf1071f5535c7777756b6dc705eb70a6758c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:40:09.879171 containerd[1527]: time="2025-12-12T18:40:09.879129498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:09.880149 containerd[1527]: time="2025-12-12T18:40:09.880057237Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:40:09.880383 containerd[1527]: time="2025-12-12T18:40:09.880334731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:40:09.881084 kubelet[2702]: E1212 18:40:09.880741 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:40:09.881084 kubelet[2702]: E1212 18:40:09.880826 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:40:09.882198 kubelet[2702]: E1212 18:40:09.880986 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-686df6b99b-m5p6t_calico-system(b68d5b78-37a9-4c51-a50e-f394db8ab487): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:09.884466 kubelet[2702]: E1212 18:40:09.884394 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-686df6b99b-m5p6t" podUID="b68d5b78-37a9-4c51-a50e-f394db8ab487" Dec 12 18:40:09.920278 systemd[1]: Started cri-containerd-9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137.scope - libcontainer container 9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137. 
Dec 12 18:40:10.019999 containerd[1527]: time="2025-12-12T18:40:10.019578809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7c9749d-j4f49,Uid:4b3492e4-bb68-4901-abef-c77777942b00,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9b37aceb9402c7b1be55d037d1116f520117cb37d6a0fbe34b24ed25d780c137\"" Dec 12 18:40:10.025422 containerd[1527]: time="2025-12-12T18:40:10.025363714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:40:10.045622 systemd-networkd[1437]: vxlan.calico: Link UP Dec 12 18:40:10.045633 systemd-networkd[1437]: vxlan.calico: Gained carrier Dec 12 18:40:10.426042 containerd[1527]: time="2025-12-12T18:40:10.425933947Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:10.427173 containerd[1527]: time="2025-12-12T18:40:10.427040793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:40:10.427173 containerd[1527]: time="2025-12-12T18:40:10.427089660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:10.427702 kubelet[2702]: E1212 18:40:10.427604 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:10.427702 kubelet[2702]: E1212 18:40:10.427670 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:10.428558 kubelet[2702]: E1212 18:40:10.428469 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pv5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7c9749d-j4f49_calico-apiserver(4b3492e4-bb68-4901-abef-c77777942b00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:10.429779 kubelet[2702]: E1212 18:40:10.429698 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" podUID="4b3492e4-bb68-4901-abef-c77777942b00" Dec 12 18:40:10.593675 containerd[1527]: time="2025-12-12T18:40:10.593611548Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-6s6gc,Uid:b0d854b1-0e79-4d9c-914e-0384874af8c1,Namespace:calico-system,Attempt:0,}" Dec 12 18:40:10.749565 systemd-networkd[1437]: cali21c27bf7324: Gained IPv6LL Dec 12 18:40:10.833418 systemd-networkd[1437]: cali595e2d405de: Link UP Dec 12 18:40:10.834609 systemd-networkd[1437]: cali595e2d405de: Gained carrier Dec 12 18:40:10.862286 containerd[1527]: 2025-12-12 18:40:10.678 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0 goldmane-666569f655- calico-system b0d854b1-0e79-4d9c-914e-0384874af8c1 885 0 2025-12-12 18:39:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-f-e155308a0b goldmane-666569f655-6s6gc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali595e2d405de [] [] }} ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" Namespace="calico-system" Pod="goldmane-666569f655-6s6gc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-" Dec 12 18:40:10.862286 containerd[1527]: 2025-12-12 18:40:10.679 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" Namespace="calico-system" Pod="goldmane-666569f655-6s6gc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0" Dec 12 18:40:10.862286 containerd[1527]: 2025-12-12 18:40:10.743 [INFO][4230] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" HandleID="k8s-pod-network.375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" 
Workload="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0" Dec 12 18:40:10.863082 containerd[1527]: 2025-12-12 18:40:10.744 [INFO][4230] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" HandleID="k8s-pod-network.375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" Workload="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-f-e155308a0b", "pod":"goldmane-666569f655-6s6gc", "timestamp":"2025-12-12 18:40:10.743740612 +0000 UTC"}, Hostname:"ci-4459.2.2-f-e155308a0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:40:10.863082 containerd[1527]: 2025-12-12 18:40:10.744 [INFO][4230] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:40:10.863082 containerd[1527]: 2025-12-12 18:40:10.744 [INFO][4230] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:40:10.863082 containerd[1527]: 2025-12-12 18:40:10.744 [INFO][4230] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-f-e155308a0b' Dec 12 18:40:10.863082 containerd[1527]: 2025-12-12 18:40:10.757 [INFO][4230] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:10.863082 containerd[1527]: 2025-12-12 18:40:10.770 [INFO][4230] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:10.863082 containerd[1527]: 2025-12-12 18:40:10.781 [INFO][4230] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:10.863082 containerd[1527]: 2025-12-12 18:40:10.785 [INFO][4230] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:10.863082 containerd[1527]: 2025-12-12 18:40:10.796 [INFO][4230] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:10.864121 containerd[1527]: 2025-12-12 18:40:10.796 [INFO][4230] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:10.864121 containerd[1527]: 2025-12-12 18:40:10.799 [INFO][4230] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3 Dec 12 18:40:10.864121 containerd[1527]: 2025-12-12 18:40:10.808 [INFO][4230] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:10.864121 containerd[1527]: 2025-12-12 18:40:10.819 [INFO][4230] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.3.195/26] block=192.168.3.192/26 handle="k8s-pod-network.375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:10.864121 containerd[1527]: 2025-12-12 18:40:10.819 [INFO][4230] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.195/26] handle="k8s-pod-network.375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:10.864121 containerd[1527]: 2025-12-12 18:40:10.819 [INFO][4230] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:40:10.864121 containerd[1527]: 2025-12-12 18:40:10.819 [INFO][4230] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.195/26] IPv6=[] ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" HandleID="k8s-pod-network.375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" Workload="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0" Dec 12 18:40:10.864372 containerd[1527]: 2025-12-12 18:40:10.826 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" Namespace="calico-system" Pod="goldmane-666569f655-6s6gc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b0d854b1-0e79-4d9c-914e-0384874af8c1", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"", Pod:"goldmane-666569f655-6s6gc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali595e2d405de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:10.864502 containerd[1527]: 2025-12-12 18:40:10.826 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.195/32] ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" Namespace="calico-system" Pod="goldmane-666569f655-6s6gc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0" Dec 12 18:40:10.864502 containerd[1527]: 2025-12-12 18:40:10.826 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali595e2d405de ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" Namespace="calico-system" Pod="goldmane-666569f655-6s6gc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0" Dec 12 18:40:10.864502 containerd[1527]: 2025-12-12 18:40:10.834 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" Namespace="calico-system" Pod="goldmane-666569f655-6s6gc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0" Dec 12 18:40:10.864616 containerd[1527]: 2025-12-12 18:40:10.837 [INFO][4214] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" Namespace="calico-system" Pod="goldmane-666569f655-6s6gc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b0d854b1-0e79-4d9c-914e-0384874af8c1", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3", Pod:"goldmane-666569f655-6s6gc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali595e2d405de", MAC:"4a:7f:b4:d1:3f:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:10.864722 containerd[1527]: 2025-12-12 18:40:10.857 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" Namespace="calico-system" Pod="goldmane-666569f655-6s6gc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-goldmane--666569f655--6s6gc-eth0" Dec 12 18:40:10.878166 systemd-networkd[1437]: calida8e236c596: Gained IPv6LL Dec 12 18:40:10.902696 kubelet[2702]: E1212 18:40:10.902630 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" podUID="4b3492e4-bb68-4901-abef-c77777942b00" Dec 12 18:40:10.904913 kubelet[2702]: E1212 18:40:10.903434 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-686df6b99b-m5p6t" podUID="b68d5b78-37a9-4c51-a50e-f394db8ab487" Dec 12 
18:40:10.927582 containerd[1527]: time="2025-12-12T18:40:10.927509542Z" level=info msg="connecting to shim 375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3" address="unix:///run/containerd/s/c4b5e26b85fdc11cb3762a35cce1fc8caaffc5406d376fd7f490524a729e4e56" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:40:11.001720 systemd[1]: Started cri-containerd-375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3.scope - libcontainer container 375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3. Dec 12 18:40:11.159245 containerd[1527]: time="2025-12-12T18:40:11.159189335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6s6gc,Uid:b0d854b1-0e79-4d9c-914e-0384874af8c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"375695ea1821d47be212f2d4da750ad3f4a8da303627854e4f75040670438ec3\"" Dec 12 18:40:11.164579 containerd[1527]: time="2025-12-12T18:40:11.164458970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:40:11.391427 systemd-networkd[1437]: vxlan.calico: Gained IPv6LL Dec 12 18:40:11.516250 containerd[1527]: time="2025-12-12T18:40:11.516207075Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:11.517346 containerd[1527]: time="2025-12-12T18:40:11.517298553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:40:11.517687 containerd[1527]: time="2025-12-12T18:40:11.517366879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:11.518244 kubelet[2702]: E1212 18:40:11.517910 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:40:11.518244 kubelet[2702]: E1212 18:40:11.517970 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:40:11.518244 kubelet[2702]: E1212 18:40:11.518157 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,
RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cwmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6s6gc_calico-system(b0d854b1-0e79-4d9c-914e-0384874af8c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:11.519695 kubelet[2702]: E1212 18:40:11.519641 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6s6gc" podUID="b0d854b1-0e79-4d9c-914e-0384874af8c1" Dec 12 18:40:11.592403 containerd[1527]: time="2025-12-12T18:40:11.592350057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65df8ff88f-mf8qj,Uid:779cb8c2-ad5e-4f5f-95e6-a4da96b7be10,Namespace:calico-system,Attempt:0,}" Dec 12 18:40:11.593064 containerd[1527]: time="2025-12-12T18:40:11.592872571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7c9749d-wbbnc,Uid:a92ba609-6740-41f9-b09b-6687ca1c0ede,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:40:11.819711 systemd-networkd[1437]: cali09e17b9bfff: Link UP Dec 12 18:40:11.822396 systemd-networkd[1437]: cali09e17b9bfff: Gained carrier Dec 12 18:40:11.848890 containerd[1527]: 2025-12-12 18:40:11.662 [INFO][4307] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0 calico-apiserver-7cb7c9749d- calico-apiserver a92ba609-6740-41f9-b09b-6687ca1c0ede 878 0 2025-12-12 18:39:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cb7c9749d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-f-e155308a0b calico-apiserver-7cb7c9749d-wbbnc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali09e17b9bfff [] [] }} ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-wbbnc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-" Dec 12 
18:40:11.848890 containerd[1527]: 2025-12-12 18:40:11.663 [INFO][4307] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-wbbnc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0" Dec 12 18:40:11.848890 containerd[1527]: 2025-12-12 18:40:11.720 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" HandleID="k8s-pod-network.f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Workload="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0" Dec 12 18:40:11.849922 containerd[1527]: 2025-12-12 18:40:11.720 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" HandleID="k8s-pod-network.f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Workload="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5dc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-f-e155308a0b", "pod":"calico-apiserver-7cb7c9749d-wbbnc", "timestamp":"2025-12-12 18:40:11.720280806 +0000 UTC"}, Hostname:"ci-4459.2.2-f-e155308a0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:40:11.849922 containerd[1527]: 2025-12-12 18:40:11.720 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:40:11.849922 containerd[1527]: 2025-12-12 18:40:11.720 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:40:11.849922 containerd[1527]: 2025-12-12 18:40:11.720 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-f-e155308a0b' Dec 12 18:40:11.849922 containerd[1527]: 2025-12-12 18:40:11.739 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:11.849922 containerd[1527]: 2025-12-12 18:40:11.750 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:11.849922 containerd[1527]: 2025-12-12 18:40:11.760 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:11.849922 containerd[1527]: 2025-12-12 18:40:11.763 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:11.849922 containerd[1527]: 2025-12-12 18:40:11.768 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:11.853494 containerd[1527]: 2025-12-12 18:40:11.768 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:11.853494 containerd[1527]: 2025-12-12 18:40:11.773 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2 Dec 12 18:40:11.853494 containerd[1527]: 2025-12-12 18:40:11.784 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:11.853494 containerd[1527]: 2025-12-12 18:40:11.797 [INFO][4325] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.3.196/26] block=192.168.3.192/26 handle="k8s-pod-network.f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:11.853494 containerd[1527]: 2025-12-12 18:40:11.797 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.196/26] handle="k8s-pod-network.f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:11.853494 containerd[1527]: 2025-12-12 18:40:11.797 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:40:11.853494 containerd[1527]: 2025-12-12 18:40:11.797 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.196/26] IPv6=[] ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" HandleID="k8s-pod-network.f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Workload="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0" Dec 12 18:40:11.854721 containerd[1527]: 2025-12-12 18:40:11.805 [INFO][4307] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-wbbnc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0", GenerateName:"calico-apiserver-7cb7c9749d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a92ba609-6740-41f9-b09b-6687ca1c0ede", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7cb7c9749d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"", Pod:"calico-apiserver-7cb7c9749d-wbbnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09e17b9bfff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:11.854862 containerd[1527]: 2025-12-12 18:40:11.805 [INFO][4307] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.196/32] ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-wbbnc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0" Dec 12 18:40:11.854862 containerd[1527]: 2025-12-12 18:40:11.805 [INFO][4307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09e17b9bfff ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-wbbnc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0" Dec 12 18:40:11.854862 containerd[1527]: 2025-12-12 18:40:11.820 [INFO][4307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-wbbnc" 
WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0" Dec 12 18:40:11.855491 containerd[1527]: 2025-12-12 18:40:11.821 [INFO][4307] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-wbbnc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0", GenerateName:"calico-apiserver-7cb7c9749d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a92ba609-6740-41f9-b09b-6687ca1c0ede", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7c9749d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2", Pod:"calico-apiserver-7cb7c9749d-wbbnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09e17b9bfff", MAC:"16:93:ad:60:44:3b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:11.855630 containerd[1527]: 2025-12-12 18:40:11.841 [INFO][4307] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7c9749d-wbbnc" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--apiserver--7cb7c9749d--wbbnc-eth0" Dec 12 18:40:11.922868 kubelet[2702]: E1212 18:40:11.922094 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" podUID="4b3492e4-bb68-4901-abef-c77777942b00" Dec 12 18:40:11.929837 kubelet[2702]: E1212 18:40:11.929473 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6s6gc" podUID="b0d854b1-0e79-4d9c-914e-0384874af8c1" Dec 12 18:40:11.950198 containerd[1527]: time="2025-12-12T18:40:11.949946693Z" level=info msg="connecting to shim f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2" 
address="unix:///run/containerd/s/b7328f1d37d9af913f5e7bbaa8dabd3f2673978a456f5a9bc7ec5746810ab604" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:40:12.029208 systemd-networkd[1437]: cali595e2d405de: Gained IPv6LL Dec 12 18:40:12.029486 systemd[1]: Started cri-containerd-f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2.scope - libcontainer container f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2. Dec 12 18:40:12.087200 systemd-networkd[1437]: cali7bf6bf83f2c: Link UP Dec 12 18:40:12.090217 systemd-networkd[1437]: cali7bf6bf83f2c: Gained carrier Dec 12 18:40:12.129056 containerd[1527]: 2025-12-12 18:40:11.670 [INFO][4298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0 calico-kube-controllers-65df8ff88f- calico-system 779cb8c2-ad5e-4f5f-95e6-a4da96b7be10 894 0 2025-12-12 18:39:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65df8ff88f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-f-e155308a0b calico-kube-controllers-65df8ff88f-mf8qj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7bf6bf83f2c [] [] }} ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Namespace="calico-system" Pod="calico-kube-controllers-65df8ff88f-mf8qj" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-" Dec 12 18:40:12.129056 containerd[1527]: 2025-12-12 18:40:11.671 [INFO][4298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Namespace="calico-system" Pod="calico-kube-controllers-65df8ff88f-mf8qj" 
WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0" Dec 12 18:40:12.129056 containerd[1527]: 2025-12-12 18:40:11.770 [INFO][4331] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" HandleID="k8s-pod-network.734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Workload="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0" Dec 12 18:40:12.129379 containerd[1527]: 2025-12-12 18:40:11.771 [INFO][4331] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" HandleID="k8s-pod-network.734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Workload="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c3950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-f-e155308a0b", "pod":"calico-kube-controllers-65df8ff88f-mf8qj", "timestamp":"2025-12-12 18:40:11.770018472 +0000 UTC"}, Hostname:"ci-4459.2.2-f-e155308a0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:40:12.129379 containerd[1527]: 2025-12-12 18:40:11.771 [INFO][4331] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:40:12.129379 containerd[1527]: 2025-12-12 18:40:11.797 [INFO][4331] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:40:12.129379 containerd[1527]: 2025-12-12 18:40:11.798 [INFO][4331] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-f-e155308a0b' Dec 12 18:40:12.129379 containerd[1527]: 2025-12-12 18:40:11.844 [INFO][4331] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.129379 containerd[1527]: 2025-12-12 18:40:11.863 [INFO][4331] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.129379 containerd[1527]: 2025-12-12 18:40:11.879 [INFO][4331] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.129379 containerd[1527]: 2025-12-12 18:40:11.889 [INFO][4331] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.129379 containerd[1527]: 2025-12-12 18:40:11.904 [INFO][4331] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.129624 containerd[1527]: 2025-12-12 18:40:11.904 [INFO][4331] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.129624 containerd[1527]: 2025-12-12 18:40:11.915 [INFO][4331] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6 Dec 12 18:40:12.129624 containerd[1527]: 2025-12-12 18:40:11.995 [INFO][4331] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.129624 containerd[1527]: 2025-12-12 18:40:12.049 [INFO][4331] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.3.197/26] block=192.168.3.192/26 handle="k8s-pod-network.734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.129624 containerd[1527]: 2025-12-12 18:40:12.050 [INFO][4331] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.197/26] handle="k8s-pod-network.734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.129624 containerd[1527]: 2025-12-12 18:40:12.051 [INFO][4331] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:40:12.129624 containerd[1527]: 2025-12-12 18:40:12.051 [INFO][4331] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.197/26] IPv6=[] ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" HandleID="k8s-pod-network.734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Workload="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0" Dec 12 18:40:12.129821 containerd[1527]: 2025-12-12 18:40:12.075 [INFO][4298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Namespace="calico-system" Pod="calico-kube-controllers-65df8ff88f-mf8qj" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0", GenerateName:"calico-kube-controllers-65df8ff88f-", Namespace:"calico-system", SelfLink:"", UID:"779cb8c2-ad5e-4f5f-95e6-a4da96b7be10", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65df8ff88f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"", Pod:"calico-kube-controllers-65df8ff88f-mf8qj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7bf6bf83f2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:12.129888 containerd[1527]: 2025-12-12 18:40:12.075 [INFO][4298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.197/32] ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Namespace="calico-system" Pod="calico-kube-controllers-65df8ff88f-mf8qj" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0" Dec 12 18:40:12.129888 containerd[1527]: 2025-12-12 18:40:12.075 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bf6bf83f2c ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Namespace="calico-system" Pod="calico-kube-controllers-65df8ff88f-mf8qj" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0" Dec 12 18:40:12.129888 containerd[1527]: 2025-12-12 18:40:12.090 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Namespace="calico-system" Pod="calico-kube-controllers-65df8ff88f-mf8qj" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0" Dec 12 18:40:12.129966 containerd[1527]: 2025-12-12 18:40:12.094 [INFO][4298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Namespace="calico-system" Pod="calico-kube-controllers-65df8ff88f-mf8qj" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0", GenerateName:"calico-kube-controllers-65df8ff88f-", Namespace:"calico-system", SelfLink:"", UID:"779cb8c2-ad5e-4f5f-95e6-a4da96b7be10", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65df8ff88f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6", Pod:"calico-kube-controllers-65df8ff88f-mf8qj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.197/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7bf6bf83f2c", MAC:"96:fa:d7:d2:b9:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:12.130024 containerd[1527]: 2025-12-12 18:40:12.123 [INFO][4298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" Namespace="calico-system" Pod="calico-kube-controllers-65df8ff88f-mf8qj" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-calico--kube--controllers--65df8ff88f--mf8qj-eth0" Dec 12 18:40:12.202518 containerd[1527]: time="2025-12-12T18:40:12.202398393Z" level=info msg="connecting to shim 734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6" address="unix:///run/containerd/s/a91f6891f2b2ba2bd6e169eb7312bc52f2a130223dc40f5967db30b42bcb826e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:40:12.250522 systemd[1]: Started cri-containerd-734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6.scope - libcontainer container 734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6. 
Dec 12 18:40:12.403199 containerd[1527]: time="2025-12-12T18:40:12.402830471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7c9749d-wbbnc,Uid:a92ba609-6740-41f9-b09b-6687ca1c0ede,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f9c67bfb88667248284fdce4d8697979c1cdb6afe142439d5e8b792c40cd9cc2\"" Dec 12 18:40:12.408888 containerd[1527]: time="2025-12-12T18:40:12.408839527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:40:12.450566 containerd[1527]: time="2025-12-12T18:40:12.450518216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65df8ff88f-mf8qj,Uid:779cb8c2-ad5e-4f5f-95e6-a4da96b7be10,Namespace:calico-system,Attempt:0,} returns sandbox id \"734d8b8a94b5ceea974ae0938c3fa9975af547919c412b1c11fb19567747d2f6\"" Dec 12 18:40:12.591380 kubelet[2702]: E1212 18:40:12.590778 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:12.593583 containerd[1527]: time="2025-12-12T18:40:12.593514315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9kb82,Uid:8c3a3740-82ee-4426-99af-2d870802441b,Namespace:kube-system,Attempt:0,}" Dec 12 18:40:12.726645 containerd[1527]: time="2025-12-12T18:40:12.726449509Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:12.727444 containerd[1527]: time="2025-12-12T18:40:12.727400821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:40:12.728103 containerd[1527]: time="2025-12-12T18:40:12.727560893Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:12.728486 kubelet[2702]: E1212 18:40:12.728453 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:12.728796 kubelet[2702]: E1212 18:40:12.728582 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:12.729515 kubelet[2702]: E1212 18:40:12.729307 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxqg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7c9749d-wbbnc_calico-apiserver(a92ba609-6740-41f9-b09b-6687ca1c0ede): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:12.730959 containerd[1527]: time="2025-12-12T18:40:12.729772317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:40:12.731155 kubelet[2702]: E1212 18:40:12.730848 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" podUID="a92ba609-6740-41f9-b09b-6687ca1c0ede" Dec 12 18:40:12.783435 systemd-networkd[1437]: calidf26de926af: Link UP Dec 12 18:40:12.784581 systemd-networkd[1437]: calidf26de926af: Gained carrier Dec 12 18:40:12.821576 containerd[1527]: 2025-12-12 18:40:12.647 [INFO][4451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0 coredns-668d6bf9bc- kube-system 8c3a3740-82ee-4426-99af-2d870802441b 889 0 2025-12-12 18:39:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-f-e155308a0b coredns-668d6bf9bc-9kb82 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidf26de926af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kb82" 
WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-" Dec 12 18:40:12.821576 containerd[1527]: 2025-12-12 18:40:12.648 [INFO][4451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kb82" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0" Dec 12 18:40:12.821576 containerd[1527]: 2025-12-12 18:40:12.697 [INFO][4464] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" HandleID="k8s-pod-network.460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Workload="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0" Dec 12 18:40:12.822291 containerd[1527]: 2025-12-12 18:40:12.697 [INFO][4464] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" HandleID="k8s-pod-network.460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Workload="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-f-e155308a0b", "pod":"coredns-668d6bf9bc-9kb82", "timestamp":"2025-12-12 18:40:12.697180713 +0000 UTC"}, Hostname:"ci-4459.2.2-f-e155308a0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:40:12.822291 containerd[1527]: 2025-12-12 18:40:12.697 [INFO][4464] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:40:12.822291 containerd[1527]: 2025-12-12 18:40:12.697 [INFO][4464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:40:12.822291 containerd[1527]: 2025-12-12 18:40:12.697 [INFO][4464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-f-e155308a0b' Dec 12 18:40:12.822291 containerd[1527]: 2025-12-12 18:40:12.708 [INFO][4464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.822291 containerd[1527]: 2025-12-12 18:40:12.720 [INFO][4464] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.822291 containerd[1527]: 2025-12-12 18:40:12.731 [INFO][4464] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.822291 containerd[1527]: 2025-12-12 18:40:12.739 [INFO][4464] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.822291 containerd[1527]: 2025-12-12 18:40:12.744 [INFO][4464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.823338 containerd[1527]: 2025-12-12 18:40:12.746 [INFO][4464] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.823338 containerd[1527]: 2025-12-12 18:40:12.750 [INFO][4464] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd Dec 12 18:40:12.823338 containerd[1527]: 2025-12-12 18:40:12.758 [INFO][4464] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.823338 containerd[1527]: 2025-12-12 18:40:12.768 [INFO][4464] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.3.198/26] block=192.168.3.192/26 handle="k8s-pod-network.460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.823338 containerd[1527]: 2025-12-12 18:40:12.768 [INFO][4464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.198/26] handle="k8s-pod-network.460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:12.823338 containerd[1527]: 2025-12-12 18:40:12.768 [INFO][4464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:40:12.823338 containerd[1527]: 2025-12-12 18:40:12.769 [INFO][4464] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.198/26] IPv6=[] ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" HandleID="k8s-pod-network.460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Workload="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0" Dec 12 18:40:12.825628 containerd[1527]: 2025-12-12 18:40:12.775 [INFO][4451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kb82" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8c3a3740-82ee-4426-99af-2d870802441b", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"", Pod:"coredns-668d6bf9bc-9kb82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf26de926af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:12.825628 containerd[1527]: 2025-12-12 18:40:12.776 [INFO][4451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.198/32] ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kb82" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0" Dec 12 18:40:12.825628 containerd[1527]: 2025-12-12 18:40:12.776 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf26de926af ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kb82" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0" Dec 12 18:40:12.825628 containerd[1527]: 2025-12-12 18:40:12.785 [INFO][4451] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kb82" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0" Dec 12 18:40:12.825628 containerd[1527]: 2025-12-12 18:40:12.786 [INFO][4451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kb82" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8c3a3740-82ee-4426-99af-2d870802441b", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd", Pod:"coredns-668d6bf9bc-9kb82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf26de926af", 
MAC:"72:98:03:31:87:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:12.825628 containerd[1527]: 2025-12-12 18:40:12.817 [INFO][4451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kb82" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--9kb82-eth0" Dec 12 18:40:12.860365 containerd[1527]: time="2025-12-12T18:40:12.860293858Z" level=info msg="connecting to shim 460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd" address="unix:///run/containerd/s/4395bb7b7ce9923acc41db3632d35d714887099c899e59047b74f4b6d1ebce19" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:40:12.917588 systemd[1]: Started cri-containerd-460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd.scope - libcontainer container 460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd. 
Dec 12 18:40:12.929497 kubelet[2702]: E1212 18:40:12.929321 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" podUID="a92ba609-6740-41f9-b09b-6687ca1c0ede" Dec 12 18:40:12.932690 kubelet[2702]: E1212 18:40:12.932158 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6s6gc" podUID="b0d854b1-0e79-4d9c-914e-0384874af8c1" Dec 12 18:40:13.012804 containerd[1527]: time="2025-12-12T18:40:13.012240991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9kb82,Uid:8c3a3740-82ee-4426-99af-2d870802441b,Namespace:kube-system,Attempt:0,} returns sandbox id \"460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd\"" Dec 12 18:40:13.020847 kubelet[2702]: E1212 18:40:13.020784 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:13.025830 containerd[1527]: time="2025-12-12T18:40:13.025771972Z" level=info msg="CreateContainer within sandbox 
\"460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:40:13.040462 containerd[1527]: time="2025-12-12T18:40:13.039181148Z" level=info msg="Container be81bd9bee6b14b275bde1ebf18c8e1bcb67c1df27ef864a27cc45e0dd88913b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:40:13.052805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3913491334.mount: Deactivated successfully. Dec 12 18:40:13.072836 containerd[1527]: time="2025-12-12T18:40:13.072764513Z" level=info msg="CreateContainer within sandbox \"460cd98133ad3f292615d6279ec869e5707d691e359676a1c11f58f6341502cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be81bd9bee6b14b275bde1ebf18c8e1bcb67c1df27ef864a27cc45e0dd88913b\"" Dec 12 18:40:13.076296 containerd[1527]: time="2025-12-12T18:40:13.076247172Z" level=info msg="StartContainer for \"be81bd9bee6b14b275bde1ebf18c8e1bcb67c1df27ef864a27cc45e0dd88913b\"" Dec 12 18:40:13.078230 containerd[1527]: time="2025-12-12T18:40:13.078115351Z" level=info msg="connecting to shim be81bd9bee6b14b275bde1ebf18c8e1bcb67c1df27ef864a27cc45e0dd88913b" address="unix:///run/containerd/s/4395bb7b7ce9923acc41db3632d35d714887099c899e59047b74f4b6d1ebce19" protocol=ttrpc version=3 Dec 12 18:40:13.082839 containerd[1527]: time="2025-12-12T18:40:13.081927604Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:13.087928 containerd[1527]: time="2025-12-12T18:40:13.087627379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:40:13.088718 containerd[1527]: time="2025-12-12T18:40:13.088166089Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:40:13.091278 kubelet[2702]: E1212 18:40:13.089153 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:40:13.091278 kubelet[2702]: E1212 18:40:13.089205 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:40:13.091278 kubelet[2702]: E1212 18:40:13.090700 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rm2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65df8ff88f-mf8qj_calico-system(779cb8c2-ad5e-4f5f-95e6-a4da96b7be10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:13.093699 kubelet[2702]: E1212 18:40:13.093592 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" podUID="779cb8c2-ad5e-4f5f-95e6-a4da96b7be10" Dec 12 18:40:13.138323 systemd[1]: Started cri-containerd-be81bd9bee6b14b275bde1ebf18c8e1bcb67c1df27ef864a27cc45e0dd88913b.scope - libcontainer container 
be81bd9bee6b14b275bde1ebf18c8e1bcb67c1df27ef864a27cc45e0dd88913b. Dec 12 18:40:13.213266 containerd[1527]: time="2025-12-12T18:40:13.213209529Z" level=info msg="StartContainer for \"be81bd9bee6b14b275bde1ebf18c8e1bcb67c1df27ef864a27cc45e0dd88913b\" returns successfully" Dec 12 18:40:13.592536 kubelet[2702]: E1212 18:40:13.592489 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:13.594725 containerd[1527]: time="2025-12-12T18:40:13.593550528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s9cv7,Uid:daebf512-6449-40e5-9d51-ce9fe62d676e,Namespace:kube-system,Attempt:0,}" Dec 12 18:40:13.784163 systemd-networkd[1437]: cali09c11431413: Link UP Dec 12 18:40:13.784631 systemd-networkd[1437]: cali09c11431413: Gained carrier Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.668 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0 coredns-668d6bf9bc- kube-system daebf512-6449-40e5-9d51-ce9fe62d676e 881 0 2025-12-12 18:39:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-f-e155308a0b coredns-668d6bf9bc-s9cv7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09c11431413 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9cv7" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.668 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9cv7" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.714 [INFO][4572] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" HandleID="k8s-pod-network.82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Workload="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.714 [INFO][4572] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" HandleID="k8s-pod-network.82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Workload="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-f-e155308a0b", "pod":"coredns-668d6bf9bc-s9cv7", "timestamp":"2025-12-12 18:40:13.714416875 +0000 UTC"}, Hostname:"ci-4459.2.2-f-e155308a0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.714 [INFO][4572] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.714 [INFO][4572] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.714 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-f-e155308a0b' Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.725 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.734 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.741 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.744 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.749 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.749 [INFO][4572] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.753 [INFO][4572] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.760 [INFO][4572] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.771 [INFO][4572] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.3.199/26] block=192.168.3.192/26 handle="k8s-pod-network.82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.772 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.199/26] handle="k8s-pod-network.82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.772 [INFO][4572] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:40:13.818348 containerd[1527]: 2025-12-12 18:40:13.772 [INFO][4572] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.199/26] IPv6=[] ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" HandleID="k8s-pod-network.82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Workload="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0" Dec 12 18:40:13.821301 containerd[1527]: 2025-12-12 18:40:13.777 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9cv7" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"daebf512-6449-40e5-9d51-ce9fe62d676e", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"", Pod:"coredns-668d6bf9bc-s9cv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09c11431413", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:13.821301 containerd[1527]: 2025-12-12 18:40:13.777 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.199/32] ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9cv7" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0" Dec 12 18:40:13.821301 containerd[1527]: 2025-12-12 18:40:13.777 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09c11431413 ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9cv7" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0" Dec 12 18:40:13.821301 containerd[1527]: 2025-12-12 18:40:13.781 [INFO][4560] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9cv7" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0" Dec 12 18:40:13.821301 containerd[1527]: 2025-12-12 18:40:13.784 [INFO][4560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9cv7" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"daebf512-6449-40e5-9d51-ce9fe62d676e", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb", Pod:"coredns-668d6bf9bc-s9cv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09c11431413", 
MAC:"fe:03:bb:23:99:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:13.821301 containerd[1527]: 2025-12-12 18:40:13.813 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-s9cv7" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-coredns--668d6bf9bc--s9cv7-eth0" Dec 12 18:40:13.854378 containerd[1527]: time="2025-12-12T18:40:13.853573625Z" level=info msg="connecting to shim 82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb" address="unix:///run/containerd/s/96c65dde5eb74dd2de4aa6f4004588181a166b319de76b9928fc2ea19efced63" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:40:13.885349 systemd-networkd[1437]: cali09e17b9bfff: Gained IPv6LL Dec 12 18:40:13.886419 systemd-networkd[1437]: cali7bf6bf83f2c: Gained IPv6LL Dec 12 18:40:13.928653 systemd[1]: Started cri-containerd-82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb.scope - libcontainer container 82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb. 
Dec 12 18:40:13.939343 kubelet[2702]: E1212 18:40:13.938981 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" podUID="a92ba609-6740-41f9-b09b-6687ca1c0ede" Dec 12 18:40:13.939343 kubelet[2702]: E1212 18:40:13.939140 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:13.939837 kubelet[2702]: E1212 18:40:13.939660 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" podUID="779cb8c2-ad5e-4f5f-95e6-a4da96b7be10" Dec 12 18:40:14.013454 kubelet[2702]: I1212 18:40:14.013134 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9kb82" podStartSLOduration=47.013103831 podStartE2EDuration="47.013103831s" podCreationTimestamp="2025-12-12 18:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 
18:40:14.009965867 +0000 UTC m=+52.599629045" watchObservedRunningTime="2025-12-12 18:40:14.013103831 +0000 UTC m=+52.602767018" Dec 12 18:40:14.037211 containerd[1527]: time="2025-12-12T18:40:14.037158598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s9cv7,Uid:daebf512-6449-40e5-9d51-ce9fe62d676e,Namespace:kube-system,Attempt:0,} returns sandbox id \"82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb\"" Dec 12 18:40:14.039406 kubelet[2702]: E1212 18:40:14.039374 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:14.043314 containerd[1527]: time="2025-12-12T18:40:14.043245997Z" level=info msg="CreateContainer within sandbox \"82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:40:14.059160 containerd[1527]: time="2025-12-12T18:40:14.058578501Z" level=info msg="Container 37072ef2b493b6461c1ee79d1d424b92c242f39841fc67fe06429058498ad883: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:40:14.064604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975642402.mount: Deactivated successfully. 
Dec 12 18:40:14.073760 containerd[1527]: time="2025-12-12T18:40:14.073712280Z" level=info msg="CreateContainer within sandbox \"82a34e716c43758c6a79947523fa617805c867e8e567b8f84a662343d57f01eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37072ef2b493b6461c1ee79d1d424b92c242f39841fc67fe06429058498ad883\"" Dec 12 18:40:14.076231 containerd[1527]: time="2025-12-12T18:40:14.075695161Z" level=info msg="StartContainer for \"37072ef2b493b6461c1ee79d1d424b92c242f39841fc67fe06429058498ad883\"" Dec 12 18:40:14.078642 containerd[1527]: time="2025-12-12T18:40:14.078583846Z" level=info msg="connecting to shim 37072ef2b493b6461c1ee79d1d424b92c242f39841fc67fe06429058498ad883" address="unix:///run/containerd/s/96c65dde5eb74dd2de4aa6f4004588181a166b319de76b9928fc2ea19efced63" protocol=ttrpc version=3 Dec 12 18:40:14.117615 systemd[1]: Started cri-containerd-37072ef2b493b6461c1ee79d1d424b92c242f39841fc67fe06429058498ad883.scope - libcontainer container 37072ef2b493b6461c1ee79d1d424b92c242f39841fc67fe06429058498ad883. 
Dec 12 18:40:14.164866 containerd[1527]: time="2025-12-12T18:40:14.164828579Z" level=info msg="StartContainer for \"37072ef2b493b6461c1ee79d1d424b92c242f39841fc67fe06429058498ad883\" returns successfully" Dec 12 18:40:14.269657 systemd-networkd[1437]: calidf26de926af: Gained IPv6LL Dec 12 18:40:14.591590 containerd[1527]: time="2025-12-12T18:40:14.591502717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2flfr,Uid:efd42e2e-3a4f-425a-9c07-184c94bcbd7e,Namespace:calico-system,Attempt:0,}" Dec 12 18:40:14.749433 systemd-networkd[1437]: cali8867cbf7436: Link UP Dec 12 18:40:14.750838 systemd-networkd[1437]: cali8867cbf7436: Gained carrier Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.643 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0 csi-node-driver- calico-system efd42e2e-3a4f-425a-9c07-184c94bcbd7e 777 0 2025-12-12 18:39:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-f-e155308a0b csi-node-driver-2flfr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8867cbf7436 [] [] }} ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Namespace="calico-system" Pod="csi-node-driver-2flfr" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.643 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Namespace="calico-system" Pod="csi-node-driver-2flfr" 
WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.686 [INFO][4685] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" HandleID="k8s-pod-network.f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Workload="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.687 [INFO][4685] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" HandleID="k8s-pod-network.f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Workload="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-f-e155308a0b", "pod":"csi-node-driver-2flfr", "timestamp":"2025-12-12 18:40:14.686918672 +0000 UTC"}, Hostname:"ci-4459.2.2-f-e155308a0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.687 [INFO][4685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.687 [INFO][4685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.687 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-f-e155308a0b' Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.696 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.705 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.713 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.716 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.719 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.719 [INFO][4685] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.724 [INFO][4685] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6 Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.731 [INFO][4685] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.740 [INFO][4685] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.3.200/26] block=192.168.3.192/26 handle="k8s-pod-network.f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.740 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.200/26] handle="k8s-pod-network.f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" host="ci-4459.2.2-f-e155308a0b" Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.740 [INFO][4685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:40:14.777128 containerd[1527]: 2025-12-12 18:40:14.740 [INFO][4685] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.200/26] IPv6=[] ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" HandleID="k8s-pod-network.f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Workload="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0" Dec 12 18:40:14.779420 containerd[1527]: 2025-12-12 18:40:14.743 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Namespace="calico-system" Pod="csi-node-driver-2flfr" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"efd42e2e-3a4f-425a-9c07-184c94bcbd7e", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"", Pod:"csi-node-driver-2flfr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8867cbf7436", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:14.779420 containerd[1527]: 2025-12-12 18:40:14.743 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.200/32] ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Namespace="calico-system" Pod="csi-node-driver-2flfr" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0" Dec 12 18:40:14.779420 containerd[1527]: 2025-12-12 18:40:14.743 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8867cbf7436 ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Namespace="calico-system" Pod="csi-node-driver-2flfr" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0" Dec 12 18:40:14.779420 containerd[1527]: 2025-12-12 18:40:14.751 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Namespace="calico-system" Pod="csi-node-driver-2flfr" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0" Dec 12 18:40:14.779420 
containerd[1527]: 2025-12-12 18:40:14.752 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Namespace="calico-system" Pod="csi-node-driver-2flfr" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"efd42e2e-3a4f-425a-9c07-184c94bcbd7e", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-f-e155308a0b", ContainerID:"f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6", Pod:"csi-node-driver-2flfr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8867cbf7436", MAC:"36:1d:9c:a4:f3:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:40:14.779420 containerd[1527]: 
2025-12-12 18:40:14.771 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" Namespace="calico-system" Pod="csi-node-driver-2flfr" WorkloadEndpoint="ci--4459.2.2--f--e155308a0b-k8s-csi--node--driver--2flfr-eth0" Dec 12 18:40:14.801211 containerd[1527]: time="2025-12-12T18:40:14.801080383Z" level=info msg="connecting to shim f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6" address="unix:///run/containerd/s/98791a5c16f40ef0c8e3dc236ca2bc4f6f89633631d7b7d62597313e15bc4a4e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:40:14.837282 systemd[1]: Started cri-containerd-f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6.scope - libcontainer container f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6. Dec 12 18:40:14.912076 containerd[1527]: time="2025-12-12T18:40:14.911974867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2flfr,Uid:efd42e2e-3a4f-425a-9c07-184c94bcbd7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4381ba54d37c0bfe0d6b3f078266260f5592bbaaab275eac5b97fd4e0287af6\"" Dec 12 18:40:14.915970 containerd[1527]: time="2025-12-12T18:40:14.915778512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:40:14.943314 kubelet[2702]: E1212 18:40:14.943281 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:14.944774 kubelet[2702]: E1212 18:40:14.943890 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:14.990102 kubelet[2702]: I1212 18:40:14.989967 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s9cv7" 
podStartSLOduration=47.989902625 podStartE2EDuration="47.989902625s" podCreationTimestamp="2025-12-12 18:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:40:14.988650849 +0000 UTC m=+53.578314034" watchObservedRunningTime="2025-12-12 18:40:14.989902625 +0000 UTC m=+53.579565819" Dec 12 18:40:15.293099 containerd[1527]: time="2025-12-12T18:40:15.292931308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:15.294446 containerd[1527]: time="2025-12-12T18:40:15.294335142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:40:15.294571 containerd[1527]: time="2025-12-12T18:40:15.294441683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:40:15.294858 kubelet[2702]: E1212 18:40:15.294800 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:40:15.295136 kubelet[2702]: E1212 18:40:15.294990 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:40:15.297092 kubelet[2702]: E1212 18:40:15.296996 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2flfr_calico-system(efd42e2e-3a4f-425a-9c07-184c94bcbd7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:15.299696 containerd[1527]: time="2025-12-12T18:40:15.299663734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:40:15.625253 containerd[1527]: time="2025-12-12T18:40:15.625093207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:15.626322 containerd[1527]: time="2025-12-12T18:40:15.626255115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:40:15.626442 containerd[1527]: time="2025-12-12T18:40:15.626385187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:40:15.626960 kubelet[2702]: E1212 18:40:15.626611 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:40:15.626960 kubelet[2702]: E1212 18:40:15.626684 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:40:15.626960 kubelet[2702]: E1212 18:40:15.626860 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2flfr_calico-system(efd42e2e-3a4f-425a-9c07-184c94bcbd7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:15.628199 kubelet[2702]: E1212 18:40:15.628119 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:40:15.805216 systemd-networkd[1437]: cali09c11431413: Gained IPv6LL Dec 12 18:40:15.947260 kubelet[2702]: E1212 18:40:15.945971 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:15.948868 kubelet[2702]: E1212 18:40:15.948765 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:15.949435 kubelet[2702]: E1212 18:40:15.949390 2702 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:40:16.189631 systemd-networkd[1437]: cali8867cbf7436: Gained IPv6LL Dec 12 18:40:16.948515 kubelet[2702]: E1212 18:40:16.948399 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:21.205306 systemd[1]: Started sshd@7-143.198.226.225:22-147.75.109.163:60262.service - OpenSSH per-connection server daemon (147.75.109.163:60262). Dec 12 18:40:21.320079 sshd[4772]: Accepted publickey for core from 147.75.109.163 port 60262 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:21.321968 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:21.332147 systemd-logind[1510]: New session 8 of user core. Dec 12 18:40:21.338315 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 12 18:40:21.616227 sshd[4775]: Connection closed by 147.75.109.163 port 60262 Dec 12 18:40:21.616766 sshd-session[4772]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:21.634184 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:40:21.634588 systemd[1]: sshd@7-143.198.226.225:22-147.75.109.163:60262.service: Deactivated successfully. Dec 12 18:40:21.640241 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:40:21.650125 systemd-logind[1510]: Removed session 8. Dec 12 18:40:23.595151 containerd[1527]: time="2025-12-12T18:40:23.594856302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:40:23.941560 containerd[1527]: time="2025-12-12T18:40:23.941396941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:23.942121 containerd[1527]: time="2025-12-12T18:40:23.942074098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:40:23.942195 containerd[1527]: time="2025-12-12T18:40:23.942165892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:23.942688 kubelet[2702]: E1212 18:40:23.942392 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:23.942688 kubelet[2702]: E1212 18:40:23.942453 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:23.942688 kubelet[2702]: E1212 18:40:23.942632 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pv5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7c9749d-j4f49_calico-apiserver(4b3492e4-bb68-4901-abef-c77777942b00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:23.943858 kubelet[2702]: E1212 18:40:23.943808 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" podUID="4b3492e4-bb68-4901-abef-c77777942b00" Dec 12 18:40:25.592749 containerd[1527]: time="2025-12-12T18:40:25.592670366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:40:25.980466 containerd[1527]: 
time="2025-12-12T18:40:25.980314328Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:25.981842 containerd[1527]: time="2025-12-12T18:40:25.981734498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:40:25.981842 containerd[1527]: time="2025-12-12T18:40:25.981794246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:40:25.982337 kubelet[2702]: E1212 18:40:25.982261 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:40:25.982337 kubelet[2702]: E1212 18:40:25.982344 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:40:25.982911 kubelet[2702]: E1212 18:40:25.982475 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b5abad96921f40f3a1fb0a13899f51dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8jw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-686df6b99b-m5p6t_calico-system(b68d5b78-37a9-4c51-a50e-f394db8ab487): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:25.985164 containerd[1527]: time="2025-12-12T18:40:25.985093448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 
18:40:26.284063 containerd[1527]: time="2025-12-12T18:40:26.283849191Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:26.285524 containerd[1527]: time="2025-12-12T18:40:26.285379267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:40:26.285524 containerd[1527]: time="2025-12-12T18:40:26.285486965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:40:26.286058 kubelet[2702]: E1212 18:40:26.285877 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:40:26.286058 kubelet[2702]: E1212 18:40:26.285933 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:40:26.286264 kubelet[2702]: E1212 18:40:26.286222 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-686df6b99b-m5p6t_calico-system(b68d5b78-37a9-4c51-a50e-f394db8ab487): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:26.287584 kubelet[2702]: E1212 18:40:26.287529 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-686df6b99b-m5p6t" podUID="b68d5b78-37a9-4c51-a50e-f394db8ab487" Dec 12 18:40:26.593835 containerd[1527]: time="2025-12-12T18:40:26.592570781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:40:26.636387 systemd[1]: Started sshd@8-143.198.226.225:22-147.75.109.163:39966.service - OpenSSH per-connection server daemon (147.75.109.163:39966). Dec 12 18:40:26.718449 sshd[4793]: Accepted publickey for core from 147.75.109.163 port 39966 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:26.720149 sshd-session[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:26.728544 systemd-logind[1510]: New session 9 of user core. Dec 12 18:40:26.733331 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 12 18:40:26.893473 sshd[4796]: Connection closed by 147.75.109.163 port 39966 Dec 12 18:40:26.894601 sshd-session[4793]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:26.900376 systemd[1]: sshd@8-143.198.226.225:22-147.75.109.163:39966.service: Deactivated successfully. Dec 12 18:40:26.903566 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:40:26.906789 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:40:26.908669 systemd-logind[1510]: Removed session 9. Dec 12 18:40:26.929383 containerd[1527]: time="2025-12-12T18:40:26.929312284Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:26.930381 containerd[1527]: time="2025-12-12T18:40:26.930263661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:40:26.930381 containerd[1527]: time="2025-12-12T18:40:26.930324221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:26.930657 kubelet[2702]: E1212 18:40:26.930590 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:40:26.930793 kubelet[2702]: E1212 18:40:26.930672 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:40:26.930927 kubelet[2702]: E1212 18:40:26.930863 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cwmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6s6gc_calico-system(b0d854b1-0e79-4d9c-914e-0384874af8c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:26.932686 kubelet[2702]: E1212 18:40:26.932624 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6s6gc" podUID="b0d854b1-0e79-4d9c-914e-0384874af8c1" Dec 12 
18:40:28.592377 containerd[1527]: time="2025-12-12T18:40:28.592062015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:40:28.917328 containerd[1527]: time="2025-12-12T18:40:28.917183407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:28.917969 containerd[1527]: time="2025-12-12T18:40:28.917912583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:40:28.918112 containerd[1527]: time="2025-12-12T18:40:28.918005173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:40:28.918256 kubelet[2702]: E1212 18:40:28.918205 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:40:28.918716 kubelet[2702]: E1212 18:40:28.918263 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:40:28.918716 kubelet[2702]: E1212 18:40:28.918405 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rm2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65df8ff88f-mf8qj_calico-system(779cb8c2-ad5e-4f5f-95e6-a4da96b7be10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:28.919548 kubelet[2702]: E1212 18:40:28.919496 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" podUID="779cb8c2-ad5e-4f5f-95e6-a4da96b7be10" Dec 12 18:40:29.594430 containerd[1527]: time="2025-12-12T18:40:29.593880443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:40:29.899737 containerd[1527]: 
time="2025-12-12T18:40:29.899584913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:29.900861 containerd[1527]: time="2025-12-12T18:40:29.900816279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:40:29.901096 containerd[1527]: time="2025-12-12T18:40:29.900912779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:29.901159 kubelet[2702]: E1212 18:40:29.901105 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:29.901238 kubelet[2702]: E1212 18:40:29.901177 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:29.901770 containerd[1527]: time="2025-12-12T18:40:29.901499872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:40:29.902100 kubelet[2702]: E1212 18:40:29.901959 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxqg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7c9749d-wbbnc_calico-apiserver(a92ba609-6740-41f9-b09b-6687ca1c0ede): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:29.903262 kubelet[2702]: E1212 18:40:29.903177 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" podUID="a92ba609-6740-41f9-b09b-6687ca1c0ede" Dec 12 18:40:30.214913 containerd[1527]: time="2025-12-12T18:40:30.214748982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:30.215738 containerd[1527]: time="2025-12-12T18:40:30.215651380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:40:30.215738 containerd[1527]: time="2025-12-12T18:40:30.215701598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:40:30.216091 kubelet[2702]: E1212 18:40:30.216002 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:40:30.216886 kubelet[2702]: E1212 18:40:30.216105 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:40:30.216886 kubelet[2702]: E1212 18:40:30.216279 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2flfr_calico-system(efd42e2e-3a4f-425a-9c07-184c94bcbd7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:30.219779 containerd[1527]: time="2025-12-12T18:40:30.219713796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:40:30.545560 containerd[1527]: time="2025-12-12T18:40:30.545478571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:30.546451 containerd[1527]: time="2025-12-12T18:40:30.546393191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:40:30.546614 containerd[1527]: time="2025-12-12T18:40:30.546502615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:40:30.547107 kubelet[2702]: E1212 18:40:30.546822 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:40:30.547107 
kubelet[2702]: E1212 18:40:30.546890 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:40:30.547107 kubelet[2702]: E1212 18:40:30.547042 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:
*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2flfr_calico-system(efd42e2e-3a4f-425a-9c07-184c94bcbd7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:30.548560 kubelet[2702]: E1212 18:40:30.548496 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:40:31.593164 kubelet[2702]: E1212 18:40:31.592154 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:31.914426 systemd[1]: Started 
sshd@9-143.198.226.225:22-147.75.109.163:39978.service - OpenSSH per-connection server daemon (147.75.109.163:39978). Dec 12 18:40:32.010120 sshd[4818]: Accepted publickey for core from 147.75.109.163 port 39978 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:32.012599 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:32.019238 systemd-logind[1510]: New session 10 of user core. Dec 12 18:40:32.027381 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 18:40:32.235757 sshd[4821]: Connection closed by 147.75.109.163 port 39978 Dec 12 18:40:32.236736 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:32.253330 systemd[1]: sshd@9-143.198.226.225:22-147.75.109.163:39978.service: Deactivated successfully. Dec 12 18:40:32.256537 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:40:32.258543 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:40:32.264216 systemd[1]: Started sshd@10-143.198.226.225:22-147.75.109.163:43150.service - OpenSSH per-connection server daemon (147.75.109.163:43150). Dec 12 18:40:32.266956 systemd-logind[1510]: Removed session 10. Dec 12 18:40:32.373687 sshd[4834]: Accepted publickey for core from 147.75.109.163 port 43150 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:32.377247 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:32.391567 systemd-logind[1510]: New session 11 of user core. Dec 12 18:40:32.397336 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:40:32.632683 sshd[4837]: Connection closed by 147.75.109.163 port 43150 Dec 12 18:40:32.633670 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:32.650535 systemd[1]: sshd@10-143.198.226.225:22-147.75.109.163:43150.service: Deactivated successfully. 
Dec 12 18:40:32.657850 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:40:32.666236 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:40:32.675108 systemd[1]: Started sshd@11-143.198.226.225:22-147.75.109.163:43164.service - OpenSSH per-connection server daemon (147.75.109.163:43164). Dec 12 18:40:32.679044 systemd-logind[1510]: Removed session 11. Dec 12 18:40:32.785711 sshd[4846]: Accepted publickey for core from 147.75.109.163 port 43164 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:32.788935 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:32.796131 systemd-logind[1510]: New session 12 of user core. Dec 12 18:40:32.807335 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:40:32.969059 sshd[4849]: Connection closed by 147.75.109.163 port 43164 Dec 12 18:40:32.968859 sshd-session[4846]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:32.977003 systemd[1]: sshd@11-143.198.226.225:22-147.75.109.163:43164.service: Deactivated successfully. Dec 12 18:40:32.977530 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:40:32.980961 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:40:32.984318 systemd-logind[1510]: Removed session 12. 
Dec 12 18:40:36.591474 kubelet[2702]: E1212 18:40:36.591379 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" podUID="4b3492e4-bb68-4901-abef-c77777942b00" Dec 12 18:40:37.983196 systemd[1]: Started sshd@12-143.198.226.225:22-147.75.109.163:43166.service - OpenSSH per-connection server daemon (147.75.109.163:43166). Dec 12 18:40:37.998215 kubelet[2702]: E1212 18:40:37.997291 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:38.082462 sshd[4887]: Accepted publickey for core from 147.75.109.163 port 43166 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:38.085351 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:38.092199 systemd-logind[1510]: New session 13 of user core. Dec 12 18:40:38.097318 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:40:38.294552 sshd[4892]: Connection closed by 147.75.109.163 port 43166 Dec 12 18:40:38.295223 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:38.299427 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:40:38.299641 systemd[1]: sshd@12-143.198.226.225:22-147.75.109.163:43166.service: Deactivated successfully. Dec 12 18:40:38.303802 systemd[1]: session-13.scope: Deactivated successfully. 
Dec 12 18:40:38.307762 systemd-logind[1510]: Removed session 13. Dec 12 18:40:38.592685 kubelet[2702]: E1212 18:40:38.592538 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6s6gc" podUID="b0d854b1-0e79-4d9c-914e-0384874af8c1" Dec 12 18:40:40.593536 kubelet[2702]: E1212 18:40:40.593333 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-686df6b99b-m5p6t" podUID="b68d5b78-37a9-4c51-a50e-f394db8ab487" Dec 12 18:40:41.594852 kubelet[2702]: E1212 18:40:41.594801 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:40:42.590880 kubelet[2702]: E1212 18:40:42.590831 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:43.310235 systemd[1]: Started sshd@13-143.198.226.225:22-147.75.109.163:33392.service - OpenSSH per-connection server daemon (147.75.109.163:33392). Dec 12 18:40:43.380789 sshd[4907]: Accepted publickey for core from 147.75.109.163 port 33392 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:43.382788 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:43.393071 systemd-logind[1510]: New session 14 of user core. Dec 12 18:40:43.397371 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:40:43.556924 sshd[4910]: Connection closed by 147.75.109.163 port 33392 Dec 12 18:40:43.558280 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:43.571595 systemd[1]: sshd@13-143.198.226.225:22-147.75.109.163:33392.service: Deactivated successfully. 
Dec 12 18:40:43.575642 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:40:43.576899 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:40:43.580969 systemd-logind[1510]: Removed session 14. Dec 12 18:40:43.601327 kubelet[2702]: E1212 18:40:43.601265 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" podUID="779cb8c2-ad5e-4f5f-95e6-a4da96b7be10" Dec 12 18:40:45.594256 kubelet[2702]: E1212 18:40:45.594201 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" podUID="a92ba609-6740-41f9-b09b-6687ca1c0ede" Dec 12 18:40:48.575128 systemd[1]: Started sshd@14-143.198.226.225:22-147.75.109.163:33406.service - OpenSSH per-connection server daemon (147.75.109.163:33406). 
Dec 12 18:40:48.717644 sshd[4922]: Accepted publickey for core from 147.75.109.163 port 33406 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:48.722513 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:48.733377 systemd-logind[1510]: New session 15 of user core. Dec 12 18:40:48.740840 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 18:40:49.082609 sshd[4925]: Connection closed by 147.75.109.163 port 33406 Dec 12 18:40:49.083293 sshd-session[4922]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:49.089952 systemd[1]: sshd@14-143.198.226.225:22-147.75.109.163:33406.service: Deactivated successfully. Dec 12 18:40:49.096851 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:40:49.104751 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:40:49.108606 systemd-logind[1510]: Removed session 15. Dec 12 18:40:49.594754 containerd[1527]: time="2025-12-12T18:40:49.593695391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:40:49.961217 containerd[1527]: time="2025-12-12T18:40:49.960921759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:49.962131 containerd[1527]: time="2025-12-12T18:40:49.962060769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:40:49.962344 containerd[1527]: time="2025-12-12T18:40:49.962062466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:49.962657 kubelet[2702]: E1212 18:40:49.962520 2702 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:40:49.963258 kubelet[2702]: E1212 18:40:49.962769 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:40:49.963672 kubelet[2702]: E1212 18:40:49.963563 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,M
ountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cwmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6s6gc_calico-system(b0d854b1-0e79-4d9c-914e-0384874af8c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:49.965340 kubelet[2702]: E1212 18:40:49.965281 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6s6gc" podUID="b0d854b1-0e79-4d9c-914e-0384874af8c1" Dec 12 18:40:50.593866 containerd[1527]: time="2025-12-12T18:40:50.593798453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:40:50.961615 containerd[1527]: time="2025-12-12T18:40:50.961440827Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:50.962433 containerd[1527]: time="2025-12-12T18:40:50.962390918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:40:50.962522 containerd[1527]: time="2025-12-12T18:40:50.962488691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:50.962786 kubelet[2702]: E1212 18:40:50.962694 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:50.962786 kubelet[2702]: E1212 18:40:50.962764 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:50.963164 kubelet[2702]: E1212 18:40:50.963104 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pv5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7c9749d-j4f49_calico-apiserver(4b3492e4-bb68-4901-abef-c77777942b00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:50.964695 kubelet[2702]: E1212 18:40:50.964651 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" podUID="4b3492e4-bb68-4901-abef-c77777942b00" Dec 12 18:40:53.592008 kubelet[2702]: E1212 18:40:53.591936 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:40:53.594313 containerd[1527]: time="2025-12-12T18:40:53.594241588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:40:53.936041 containerd[1527]: time="2025-12-12T18:40:53.935717973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:53.938096 containerd[1527]: time="2025-12-12T18:40:53.936847068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:40:53.939833 containerd[1527]: time="2025-12-12T18:40:53.939747088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:40:53.940202 kubelet[2702]: E1212 18:40:53.940144 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:40:53.940329 kubelet[2702]: E1212 18:40:53.940222 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:40:53.940590 kubelet[2702]: E1212 18:40:53.940523 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2flfr_calico-system(efd42e2e-3a4f-425a-9c07-184c94bcbd7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:53.942560 containerd[1527]: time="2025-12-12T18:40:53.942411288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:40:54.100375 systemd[1]: Started sshd@15-143.198.226.225:22-147.75.109.163:34454.service - OpenSSH per-connection server daemon (147.75.109.163:34454). Dec 12 18:40:54.200051 sshd[4942]: Accepted publickey for core from 147.75.109.163 port 34454 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:54.203067 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:54.212414 systemd-logind[1510]: New session 16 of user core. Dec 12 18:40:54.215843 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:40:54.303634 containerd[1527]: time="2025-12-12T18:40:54.303572565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:54.305691 containerd[1527]: time="2025-12-12T18:40:54.305271624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:40:54.305949 containerd[1527]: time="2025-12-12T18:40:54.305508769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:40:54.307167 kubelet[2702]: E1212 18:40:54.307101 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:40:54.307473 kubelet[2702]: E1212 
18:40:54.307359 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:40:54.308295 kubelet[2702]: E1212 18:40:54.307811 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b5abad96921f40f3a1fb0a13899f51dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8jw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-686df6b99b-m5p6t_calico-system(b68d5b78-37a9-4c51-a50e-f394db8ab487): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:54.308873 containerd[1527]: time="2025-12-12T18:40:54.308485861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:40:54.582611 sshd[4945]: Connection closed by 147.75.109.163 port 34454 Dec 12 18:40:54.585360 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:54.605191 systemd[1]: sshd@15-143.198.226.225:22-147.75.109.163:34454.service: Deactivated successfully. Dec 12 18:40:54.609524 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:40:54.612753 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:40:54.619461 systemd[1]: Started sshd@16-143.198.226.225:22-147.75.109.163:34458.service - OpenSSH per-connection server daemon (147.75.109.163:34458). Dec 12 18:40:54.622413 systemd-logind[1510]: Removed session 16. 
Dec 12 18:40:54.652059 containerd[1527]: time="2025-12-12T18:40:54.651698725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:54.655002 containerd[1527]: time="2025-12-12T18:40:54.654340297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:40:54.655218 containerd[1527]: time="2025-12-12T18:40:54.654399998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:40:54.655566 kubelet[2702]: E1212 18:40:54.655403 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:40:54.657449 kubelet[2702]: E1212 18:40:54.656104 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:40:54.657449 kubelet[2702]: E1212 18:40:54.656541 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2flfr_calico-system(efd42e2e-3a4f-425a-9c07-184c94bcbd7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:54.658329 kubelet[2702]: E1212 18:40:54.657872 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e" Dec 12 18:40:54.659585 containerd[1527]: time="2025-12-12T18:40:54.658606908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:40:54.725055 sshd[4957]: Accepted publickey for core from 147.75.109.163 port 34458 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:54.726969 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:54.738120 systemd-logind[1510]: New session 17 of user core. Dec 12 18:40:54.744376 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 18:40:55.036595 containerd[1527]: time="2025-12-12T18:40:55.036174490Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:55.042055 containerd[1527]: time="2025-12-12T18:40:55.041912545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:40:55.042213 containerd[1527]: time="2025-12-12T18:40:55.042062244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:40:55.042640 kubelet[2702]: E1212 18:40:55.042576 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:40:55.044831 kubelet[2702]: E1212 18:40:55.042642 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:40:55.044831 kubelet[2702]: E1212 18:40:55.044423 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-686df6b99b-m5p6t_calico-system(b68d5b78-37a9-4c51-a50e-f394db8ab487): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:55.047020 kubelet[2702]: E1212 18:40:55.046942 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-686df6b99b-m5p6t" podUID="b68d5b78-37a9-4c51-a50e-f394db8ab487" Dec 12 18:40:55.138217 sshd[4960]: Connection closed by 147.75.109.163 port 34458 Dec 12 18:40:55.139021 sshd-session[4957]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:55.160339 systemd[1]: sshd@16-143.198.226.225:22-147.75.109.163:34458.service: Deactivated successfully. Dec 12 18:40:55.169887 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:40:55.173827 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:40:55.180960 systemd[1]: Started sshd@17-143.198.226.225:22-147.75.109.163:34474.service - OpenSSH per-connection server daemon (147.75.109.163:34474). Dec 12 18:40:55.183408 systemd-logind[1510]: Removed session 17. 
Dec 12 18:40:55.325319 sshd[4972]: Accepted publickey for core from 147.75.109.163 port 34474 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:55.328982 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:55.341112 systemd-logind[1510]: New session 18 of user core. Dec 12 18:40:55.346344 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 18:40:56.391169 sshd[4975]: Connection closed by 147.75.109.163 port 34474 Dec 12 18:40:56.393328 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:56.408631 systemd[1]: sshd@17-143.198.226.225:22-147.75.109.163:34474.service: Deactivated successfully. Dec 12 18:40:56.413845 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:40:56.415834 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:40:56.424470 systemd[1]: Started sshd@18-143.198.226.225:22-147.75.109.163:34476.service - OpenSSH per-connection server daemon (147.75.109.163:34476). Dec 12 18:40:56.426667 systemd-logind[1510]: Removed session 18. Dec 12 18:40:56.530786 sshd[4991]: Accepted publickey for core from 147.75.109.163 port 34476 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:56.532467 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:56.542269 systemd-logind[1510]: New session 19 of user core. Dec 12 18:40:56.547325 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 18:40:56.593597 containerd[1527]: time="2025-12-12T18:40:56.593555438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:40:56.942898 containerd[1527]: time="2025-12-12T18:40:56.942702685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:56.945324 containerd[1527]: time="2025-12-12T18:40:56.945173808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:40:56.945324 containerd[1527]: time="2025-12-12T18:40:56.945287930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:40:56.946304 kubelet[2702]: E1212 18:40:56.946217 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:40:56.946304 kubelet[2702]: E1212 18:40:56.946288 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:40:56.946916 kubelet[2702]: E1212 18:40:56.946454 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rm2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65df8ff88f-mf8qj_calico-system(779cb8c2-ad5e-4f5f-95e6-a4da96b7be10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:56.948981 kubelet[2702]: E1212 18:40:56.948092 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" podUID="779cb8c2-ad5e-4f5f-95e6-a4da96b7be10" Dec 12 18:40:57.398489 sshd[4996]: Connection closed by 147.75.109.163 port 34476 Dec 12 18:40:57.399318 sshd-session[4991]: pam_unix(sshd:session): session closed for user core Dec 12 
18:40:57.414910 systemd[1]: sshd@18-143.198.226.225:22-147.75.109.163:34476.service: Deactivated successfully. Dec 12 18:40:57.421441 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:40:57.425871 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:40:57.432623 systemd[1]: Started sshd@19-143.198.226.225:22-147.75.109.163:34482.service - OpenSSH per-connection server daemon (147.75.109.163:34482). Dec 12 18:40:57.436256 systemd-logind[1510]: Removed session 19. Dec 12 18:40:57.553791 sshd[5006]: Accepted publickey for core from 147.75.109.163 port 34482 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:40:57.556142 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:40:57.566321 systemd-logind[1510]: New session 20 of user core. Dec 12 18:40:57.571318 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 18:40:57.769520 sshd[5010]: Connection closed by 147.75.109.163 port 34482 Dec 12 18:40:57.770358 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Dec 12 18:40:57.778139 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:40:57.779336 systemd[1]: sshd@19-143.198.226.225:22-147.75.109.163:34482.service: Deactivated successfully. Dec 12 18:40:57.783653 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:40:57.786056 systemd-logind[1510]: Removed session 20. 
Dec 12 18:41:00.591095 kubelet[2702]: E1212 18:41:00.590921 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:41:00.594510 containerd[1527]: time="2025-12-12T18:41:00.594447877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:41:00.917195 containerd[1527]: time="2025-12-12T18:41:00.916893970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:41:00.917974 containerd[1527]: time="2025-12-12T18:41:00.917840109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:41:00.917974 containerd[1527]: time="2025-12-12T18:41:00.917943030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:41:00.919917 kubelet[2702]: E1212 18:41:00.918261 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:41:00.919917 kubelet[2702]: E1212 18:41:00.918314 2702 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:41:00.919917 kubelet[2702]: 
E1212 18:41:00.918443 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxqg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7c9749d-wbbnc_calico-apiserver(a92ba609-6740-41f9-b09b-6687ca1c0ede): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:41:00.920455 kubelet[2702]: E1212 18:41:00.920405 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" podUID="a92ba609-6740-41f9-b09b-6687ca1c0ede"
Dec 12 18:41:01.593289 kubelet[2702]: E1212 18:41:01.592760 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6s6gc" podUID="b0d854b1-0e79-4d9c-914e-0384874af8c1"
Dec 12 18:41:02.794424 systemd[1]: Started sshd@20-143.198.226.225:22-147.75.109.163:49560.service - OpenSSH per-connection server daemon (147.75.109.163:49560).
Dec 12 18:41:02.873995 sshd[5025]: Accepted publickey for core from 147.75.109.163 port 49560 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:41:02.876891 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:02.887295 systemd-logind[1510]: New session 21 of user core.
Dec 12 18:41:02.895297 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 12 18:41:03.142892 sshd[5028]: Connection closed by 147.75.109.163 port 49560
Dec 12 18:41:03.143368 sshd-session[5025]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:03.150944 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit.
Dec 12 18:41:03.151849 systemd[1]: sshd@20-143.198.226.225:22-147.75.109.163:49560.service: Deactivated successfully.
Dec 12 18:41:03.156566 systemd[1]: session-21.scope: Deactivated successfully.
Dec 12 18:41:03.161851 systemd-logind[1510]: Removed session 21.
Dec 12 18:41:05.595011 kubelet[2702]: E1212 18:41:05.594958 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-j4f49" podUID="4b3492e4-bb68-4901-abef-c77777942b00"
Dec 12 18:41:06.593639 kubelet[2702]: E1212 18:41:06.593479 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-686df6b99b-m5p6t" podUID="b68d5b78-37a9-4c51-a50e-f394db8ab487"
Dec 12 18:41:07.596496 kubelet[2702]: E1212 18:41:07.596439 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65df8ff88f-mf8qj" podUID="779cb8c2-ad5e-4f5f-95e6-a4da96b7be10"
Dec 12 18:41:08.159813 systemd[1]: Started sshd@21-143.198.226.225:22-147.75.109.163:49570.service - OpenSSH per-connection server daemon (147.75.109.163:49570).
Dec 12 18:41:08.258616 sshd[5067]: Accepted publickey for core from 147.75.109.163 port 49570 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:41:08.262326 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:08.273472 systemd-logind[1510]: New session 22 of user core.
Dec 12 18:41:08.279321 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 12 18:41:08.562415 sshd[5070]: Connection closed by 147.75.109.163 port 49570
Dec 12 18:41:08.562832 sshd-session[5067]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:08.570525 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit.
Dec 12 18:41:08.570612 systemd[1]: sshd@21-143.198.226.225:22-147.75.109.163:49570.service: Deactivated successfully.
Dec 12 18:41:08.574149 systemd[1]: session-22.scope: Deactivated successfully.
Dec 12 18:41:08.579288 systemd-logind[1510]: Removed session 22.
Dec 12 18:41:09.597734 kubelet[2702]: E1212 18:41:09.597686 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2flfr" podUID="efd42e2e-3a4f-425a-9c07-184c94bcbd7e"
Dec 12 18:41:12.593458 kubelet[2702]: E1212 18:41:12.593384 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7c9749d-wbbnc" podUID="a92ba609-6740-41f9-b09b-6687ca1c0ede"
Dec 12 18:41:13.588314 systemd[1]: Started sshd@22-143.198.226.225:22-147.75.109.163:51212.service - OpenSSH per-connection server daemon (147.75.109.163:51212).
Dec 12 18:41:13.756102 sshd[5082]: Accepted publickey for core from 147.75.109.163 port 51212 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:41:13.761603 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:13.770676 systemd-logind[1510]: New session 23 of user core.
Dec 12 18:41:13.774752 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 12 18:41:14.074134 sshd[5085]: Connection closed by 147.75.109.163 port 51212
Dec 12 18:41:14.075242 sshd-session[5082]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:14.082565 systemd[1]: sshd@22-143.198.226.225:22-147.75.109.163:51212.service: Deactivated successfully.
Dec 12 18:41:14.086637 systemd[1]: session-23.scope: Deactivated successfully.
Dec 12 18:41:14.088854 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit.
Dec 12 18:41:14.093293 systemd-logind[1510]: Removed session 23.
Dec 12 18:41:16.593330 kubelet[2702]: E1212 18:41:16.592852 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6s6gc" podUID="b0d854b1-0e79-4d9c-914e-0384874af8c1"