Nov 4 04:57:51.897558 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 03:00:51 -00 2025
Nov 4 04:57:51.897595 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 04:57:51.897611 kernel: BIOS-provided physical RAM map:
Nov 4 04:57:51.897618 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 4 04:57:51.897625 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 4 04:57:51.897632 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 04:57:51.897641 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 4 04:57:51.897652 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 4 04:57:51.897660 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 04:57:51.897667 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 04:57:51.897678 kernel: NX (Execute Disable) protection: active
Nov 4 04:57:51.897685 kernel: APIC: Static calls initialized
Nov 4 04:57:51.897693 kernel: SMBIOS 2.8 present.
Nov 4 04:57:51.897701 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 4 04:57:51.897711 kernel: DMI: Memory slots populated: 1/1
Nov 4 04:57:51.897722 kernel: Hypervisor detected: KVM
Nov 4 04:57:51.897734 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 4 04:57:51.897743 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 04:57:51.900446 kernel: kvm-clock: using sched offset of 3874222743 cycles
Nov 4 04:57:51.900464 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 04:57:51.900474 kernel: tsc: Detected 2494.140 MHz processor
Nov 4 04:57:51.900484 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 04:57:51.900494 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 04:57:51.900517 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 4 04:57:51.900528 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 04:57:51.900537 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 04:57:51.900546 kernel: ACPI: Early table checksum verification disabled
Nov 4 04:57:51.900555 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 4 04:57:51.900564 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:51.900573 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:51.900588 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:51.900597 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 4 04:57:51.900605 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:51.900614 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:51.900623 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:51.900632 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:57:51.900641 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 4 04:57:51.900655 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 4 04:57:51.900664 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 4 04:57:51.900673 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 4 04:57:51.900688 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 4 04:57:51.900697 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 4 04:57:51.900706 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 4 04:57:51.900720 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 4 04:57:51.900730 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 4 04:57:51.900739 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Nov 4 04:57:51.900748 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Nov 4 04:57:51.900758 kernel: Zone ranges:
Nov 4 04:57:51.900767 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 04:57:51.900784 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 4 04:57:51.900793 kernel: Normal empty
Nov 4 04:57:51.900803 kernel: Device empty
Nov 4 04:57:51.900812 kernel: Movable zone start for each node
Nov 4 04:57:51.900821 kernel: Early memory node ranges
Nov 4 04:57:51.900831 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 04:57:51.900840 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 4 04:57:51.900849 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 4 04:57:51.900863 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 04:57:51.900873 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 04:57:51.900882 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 4 04:57:51.900892 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 04:57:51.900904 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 04:57:51.900913 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 04:57:51.900925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 04:57:51.900939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 04:57:51.900948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 04:57:51.900961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 04:57:51.900970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 04:57:51.900979 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 04:57:51.900989 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 04:57:51.900999 kernel: TSC deadline timer available
Nov 4 04:57:51.901013 kernel: CPU topo: Max. logical packages: 1
Nov 4 04:57:51.901022 kernel: CPU topo: Max. logical dies: 1
Nov 4 04:57:51.901031 kernel: CPU topo: Max. dies per package: 1
Nov 4 04:57:51.901040 kernel: CPU topo: Max. threads per core: 1
Nov 4 04:57:51.901049 kernel: CPU topo: Num. cores per package: 2
Nov 4 04:57:51.901059 kernel: CPU topo: Num. threads per package: 2
Nov 4 04:57:51.901067 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 4 04:57:51.901076 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 04:57:51.901091 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 4 04:57:51.901100 kernel: Booting paravirtualized kernel on KVM
Nov 4 04:57:51.901109 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 04:57:51.901118 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 4 04:57:51.901127 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 4 04:57:51.901137 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 4 04:57:51.901146 kernel: pcpu-alloc: [0] 0 1
Nov 4 04:57:51.901161 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 4 04:57:51.901172 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 04:57:51.901181 kernel: random: crng init done
Nov 4 04:57:51.901190 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 04:57:51.901200 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 4 04:57:51.901209 kernel: Fallback order for Node 0: 0
Nov 4 04:57:51.901223 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Nov 4 04:57:51.901232 kernel: Policy zone: DMA32
Nov 4 04:57:51.901241 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 04:57:51.901250 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 4 04:57:51.901259 kernel: Kernel/User page tables isolation: enabled
Nov 4 04:57:51.901268 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 04:57:51.901277 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 04:57:51.901286 kernel: Dynamic Preempt: voluntary
Nov 4 04:57:51.901301 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 04:57:51.901312 kernel: rcu: RCU event tracing is enabled.
Nov 4 04:57:51.901321 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 4 04:57:51.901330 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 04:57:51.901339 kernel: Rude variant of Tasks RCU enabled.
Nov 4 04:57:51.901348 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 04:57:51.901358 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 04:57:51.901373 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 4 04:57:51.901382 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 04:57:51.904464 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 04:57:51.904487 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 04:57:51.904497 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 4 04:57:51.904507 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 04:57:51.904516 kernel: Console: colour VGA+ 80x25
Nov 4 04:57:51.904526 kernel: printk: legacy console [tty0] enabled
Nov 4 04:57:51.904551 kernel: printk: legacy console [ttyS0] enabled
Nov 4 04:57:51.904561 kernel: ACPI: Core revision 20240827
Nov 4 04:57:51.904571 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 04:57:51.904596 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 04:57:51.904611 kernel: x2apic enabled
Nov 4 04:57:51.904621 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 04:57:51.904631 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 04:57:51.904642 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Nov 4 04:57:51.904655 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Nov 4 04:57:51.904670 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 4 04:57:51.904680 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 4 04:57:51.904690 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 04:57:51.904700 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 04:57:51.904715 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 04:57:51.904725 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 4 04:57:51.904735 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 04:57:51.904745 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 04:57:51.904755 kernel: MDS: Mitigation: Clear CPU buffers
Nov 4 04:57:51.904765 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 4 04:57:51.904776 kernel: active return thunk: its_return_thunk
Nov 4 04:57:51.904790 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 4 04:57:51.904800 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 04:57:51.904810 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 04:57:51.904819 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 04:57:51.904829 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 04:57:51.904839 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 4 04:57:51.904849 kernel: Freeing SMP alternatives memory: 32K
Nov 4 04:57:51.904865 kernel: pid_max: default: 32768 minimum: 301
Nov 4 04:57:51.904875 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 04:57:51.904884 kernel: landlock: Up and running.
Nov 4 04:57:51.904894 kernel: SELinux: Initializing.
Nov 4 04:57:51.904904 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 04:57:51.904914 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 04:57:51.904924 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 4 04:57:51.904939 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 4 04:57:51.904949 kernel: signal: max sigframe size: 1776
Nov 4 04:57:51.904959 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 04:57:51.904969 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 04:57:51.904979 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 04:57:51.904989 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 4 04:57:51.904998 kernel: smp: Bringing up secondary CPUs ...
Nov 4 04:57:51.905015 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 04:57:51.905026 kernel: .... node #0, CPUs: #1
Nov 4 04:57:51.905035 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 04:57:51.905045 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Nov 4 04:57:51.905056 kernel: Memory: 1985340K/2096612K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15360K init, 2684K bss, 106708K reserved, 0K cma-reserved)
Nov 4 04:57:51.905066 kernel: devtmpfs: initialized
Nov 4 04:57:51.905075 kernel: x86/mm: Memory block size: 128MB
Nov 4 04:57:51.905090 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 04:57:51.905100 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 4 04:57:51.905110 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 04:57:51.905119 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 04:57:51.905129 kernel: audit: initializing netlink subsys (disabled)
Nov 4 04:57:51.905139 kernel: audit: type=2000 audit(1762232268.806:1): state=initialized audit_enabled=0 res=1
Nov 4 04:57:51.905148 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 04:57:51.905164 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 04:57:51.905173 kernel: cpuidle: using governor menu
Nov 4 04:57:51.905183 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 04:57:51.905193 kernel: dca service started, version 1.12.1
Nov 4 04:57:51.905202 kernel: PCI: Using configuration type 1 for base access
Nov 4 04:57:51.905212 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 04:57:51.905222 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 04:57:51.905237 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 04:57:51.905246 kernel: ACPI: Added _OSI(Module Device)
Nov 4 04:57:51.905256 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 04:57:51.905266 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 04:57:51.905276 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 04:57:51.905286 kernel: ACPI: Interpreter enabled
Nov 4 04:57:51.905295 kernel: ACPI: PM: (supports S0 S5)
Nov 4 04:57:51.905310 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 04:57:51.905319 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 04:57:51.905329 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 04:57:51.905339 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 4 04:57:51.905349 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 04:57:51.905712 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 04:57:51.905905 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 4 04:57:51.906059 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 4 04:57:51.906072 kernel: acpiphp: Slot [3] registered
Nov 4 04:57:51.906083 kernel: acpiphp: Slot [4] registered
Nov 4 04:57:51.906092 kernel: acpiphp: Slot [5] registered
Nov 4 04:57:51.906102 kernel: acpiphp: Slot [6] registered
Nov 4 04:57:51.906112 kernel: acpiphp: Slot [7] registered
Nov 4 04:57:51.906129 kernel: acpiphp: Slot [8] registered
Nov 4 04:57:51.906140 kernel: acpiphp: Slot [9] registered
Nov 4 04:57:51.906149 kernel: acpiphp: Slot [10] registered
Nov 4 04:57:51.906164 kernel: acpiphp: Slot [11] registered
Nov 4 04:57:51.906178 kernel: acpiphp: Slot [12] registered
Nov 4 04:57:51.906193 kernel: acpiphp: Slot [13] registered
Nov 4 04:57:51.906208 kernel: acpiphp: Slot [14] registered
Nov 4 04:57:51.906224 kernel: acpiphp: Slot [15] registered
Nov 4 04:57:51.906243 kernel: acpiphp: Slot [16] registered
Nov 4 04:57:51.906253 kernel: acpiphp: Slot [17] registered
Nov 4 04:57:51.906263 kernel: acpiphp: Slot [18] registered
Nov 4 04:57:51.906273 kernel: acpiphp: Slot [19] registered
Nov 4 04:57:51.906282 kernel: acpiphp: Slot [20] registered
Nov 4 04:57:51.906292 kernel: acpiphp: Slot [21] registered
Nov 4 04:57:51.906302 kernel: acpiphp: Slot [22] registered
Nov 4 04:57:51.906317 kernel: acpiphp: Slot [23] registered
Nov 4 04:57:51.906327 kernel: acpiphp: Slot [24] registered
Nov 4 04:57:51.906337 kernel: acpiphp: Slot [25] registered
Nov 4 04:57:51.906346 kernel: acpiphp: Slot [26] registered
Nov 4 04:57:51.907484 kernel: acpiphp: Slot [27] registered
Nov 4 04:57:51.907497 kernel: acpiphp: Slot [28] registered
Nov 4 04:57:51.907507 kernel: acpiphp: Slot [29] registered
Nov 4 04:57:51.907534 kernel: acpiphp: Slot [30] registered
Nov 4 04:57:51.907544 kernel: acpiphp: Slot [31] registered
Nov 4 04:57:51.907555 kernel: PCI host bridge to bus 0000:00
Nov 4 04:57:51.907761 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 04:57:51.907885 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 04:57:51.908005 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 04:57:51.908123 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 4 04:57:51.908253 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 4 04:57:51.908371 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 04:57:51.908547 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 4 04:57:51.908690 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 4 04:57:51.908832 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 4 04:57:51.908976 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Nov 4 04:57:51.909114 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 4 04:57:51.909245 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 4 04:57:51.909376 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 4 04:57:51.917685 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 4 04:57:51.917903 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 4 04:57:51.918040 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Nov 4 04:57:51.918181 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 4 04:57:51.918313 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 4 04:57:51.918490 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 4 04:57:51.918632 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 4 04:57:51.918776 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 4 04:57:51.918906 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 4 04:57:51.919036 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Nov 4 04:57:51.919164 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Nov 4 04:57:51.919293 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 04:57:51.920543 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 04:57:51.920757 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Nov 4 04:57:51.921002 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Nov 4 04:57:51.921215 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 4 04:57:51.921359 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 04:57:51.922597 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Nov 4 04:57:51.922765 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Nov 4 04:57:51.922895 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 4 04:57:51.923038 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 4 04:57:51.923171 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Nov 4 04:57:51.923301 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Nov 4 04:57:51.925517 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 4 04:57:51.925706 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 04:57:51.925839 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Nov 4 04:57:51.925987 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Nov 4 04:57:51.926156 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 4 04:57:51.926298 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 04:57:51.926552 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Nov 4 04:57:51.926784 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Nov 4 04:57:51.926970 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 4 04:57:51.927122 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 04:57:51.927253 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Nov 4 04:57:51.928557 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 4 04:57:51.928598 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 04:57:51.928618 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 04:57:51.928636 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 04:57:51.928653 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 04:57:51.928671 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 4 04:57:51.928689 kernel: iommu: Default domain type: Translated
Nov 4 04:57:51.928726 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 04:57:51.928744 kernel: PCI: Using ACPI for IRQ routing
Nov 4 04:57:51.928762 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 04:57:51.928779 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 4 04:57:51.928797 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 4 04:57:51.929069 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 4 04:57:51.929299 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 4 04:57:51.929568 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 04:57:51.929592 kernel: vgaarb: loaded
Nov 4 04:57:51.929610 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 04:57:51.929627 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 04:57:51.929645 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 04:57:51.929663 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 04:57:51.929680 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 04:57:51.929713 kernel: pnp: PnP ACPI init
Nov 4 04:57:51.929731 kernel: pnp: PnP ACPI: found 4 devices
Nov 4 04:57:51.929750 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 04:57:51.929767 kernel: NET: Registered PF_INET protocol family
Nov 4 04:57:51.929785 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 04:57:51.929803 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 4 04:57:51.929821 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 04:57:51.929847 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 4 04:57:51.929865 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 4 04:57:51.929884 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 4 04:57:51.929901 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 4 04:57:51.929919 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 4 04:57:51.929936 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 04:57:51.929954 kernel: NET: Registered PF_XDP protocol family
Nov 4 04:57:51.930201 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 04:57:51.930594 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 04:57:51.930808 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 04:57:51.931006 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 4 04:57:51.931205 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 4 04:57:51.931493 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 4 04:57:51.931729 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 4 04:57:51.931778 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 4 04:57:51.932010 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 25069 usecs
Nov 4 04:57:51.932033 kernel: PCI: CLS 0 bytes, default 64
Nov 4 04:57:51.932053 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 4 04:57:51.932072 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Nov 4 04:57:51.932090 kernel: Initialise system trusted keyrings
Nov 4 04:57:51.932123 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 4 04:57:51.932141 kernel: Key type asymmetric registered
Nov 4 04:57:51.932158 kernel: Asymmetric key parser 'x509' registered
Nov 4 04:57:51.932176 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 04:57:51.932193 kernel: io scheduler mq-deadline registered
Nov 4 04:57:51.932208 kernel: io scheduler kyber registered
Nov 4 04:57:51.932225 kernel: io scheduler bfq registered
Nov 4 04:57:51.932244 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 04:57:51.932272 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 4 04:57:51.932290 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 4 04:57:51.932308 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 4 04:57:51.932326 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 04:57:51.932344 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 04:57:51.932361 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 04:57:51.932379 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 04:57:51.932421 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 04:57:51.932691 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 4 04:57:51.932717 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 04:57:51.932927 kernel: rtc_cmos 00:03: registered as rtc0
Nov 4 04:57:51.933137 kernel: rtc_cmos 00:03: setting system clock to 2025-11-04T04:57:50 UTC (1762232270)
Nov 4 04:57:51.933346 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 4 04:57:51.933388 kernel: intel_pstate: CPU model not supported
Nov 4 04:57:51.933424 kernel: NET: Registered PF_INET6 protocol family
Nov 4 04:57:51.933443 kernel: Segment Routing with IPv6
Nov 4 04:57:51.933463 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 04:57:51.933481 kernel: NET: Registered PF_PACKET protocol family
Nov 4 04:57:51.933498 kernel: Key type dns_resolver registered
Nov 4 04:57:51.933516 kernel: IPI shorthand broadcast: enabled
Nov 4 04:57:51.933544 kernel: sched_clock: Marking stable (1789087063, 143726387)->(1957823002, -25009552)
Nov 4 04:57:51.933562 kernel: registered taskstats version 1
Nov 4 04:57:51.933580 kernel: Loading compiled-in X.509 certificates
Nov 4 04:57:51.933598 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: dafbe857b8ef9eaad4381fdddb57853ce023547e'
Nov 4 04:57:51.933615 kernel: Demotion targets for Node 0: null
Nov 4 04:57:51.933632 kernel: Key type .fscrypt registered
Nov 4 04:57:51.933650 kernel: Key type fscrypt-provisioning registered
Nov 4 04:57:51.933718 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 04:57:51.933739 kernel: ima: Allocated hash algorithm: sha1
Nov 4 04:57:51.933757 kernel: ima: No architecture policies found
Nov 4 04:57:51.933776 kernel: clk: Disabling unused clocks
Nov 4 04:57:51.933795 kernel: Freeing unused kernel image (initmem) memory: 15360K
Nov 4 04:57:51.933814 kernel: Write protecting the kernel read-only data: 45056k
Nov 4 04:57:51.933833 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 4 04:57:51.933858 kernel: Run /init as init process
Nov 4 04:57:51.933878 kernel: with arguments:
Nov 4 04:57:51.933896 kernel: /init
Nov 4 04:57:51.933915 kernel: with environment:
Nov 4 04:57:51.933933 kernel: HOME=/
Nov 4 04:57:51.933951 kernel: TERM=linux
Nov 4 04:57:51.933969 kernel: SCSI subsystem initialized
Nov 4 04:57:51.933987 kernel: libata version 3.00 loaded.
Nov 4 04:57:51.934250 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 4 04:57:51.934546 kernel: scsi host0: ata_piix
Nov 4 04:57:51.934794 kernel: scsi host1: ata_piix
Nov 4 04:57:51.934822 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Nov 4 04:57:51.934841 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Nov 4 04:57:51.934878 kernel: ACPI: bus type USB registered
Nov 4 04:57:51.934897 kernel: usbcore: registered new interface driver usbfs
Nov 4 04:57:51.934916 kernel: usbcore: registered new interface driver hub
Nov 4 04:57:51.934934 kernel: usbcore: registered new device driver usb
Nov 4 04:57:51.935167 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 4 04:57:51.935411 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 4 04:57:51.935641 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 4 04:57:51.935893 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 4 04:57:51.936165 kernel: hub 1-0:1.0: USB hub found
Nov 4 04:57:51.936430 kernel: hub 1-0:1.0: 2 ports detected
Nov 4 04:57:51.936710 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 4 04:57:51.936938 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 4 04:57:51.936963 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 04:57:51.936982 kernel: GPT:16515071 != 125829119
Nov 4 04:57:51.937000 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 04:57:51.937019 kernel: GPT:16515071 != 125829119
Nov 4 04:57:51.937052 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 04:57:51.937080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 4 04:57:51.937331 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 4 04:57:51.937586 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 4 04:57:51.937827 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Nov 4 04:57:51.938093 kernel: scsi host2: Virtio SCSI HBA Nov 4 04:57:51.938134 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 4 04:57:51.938154 kernel: device-mapper: uevent: version 1.0.3 Nov 4 04:57:51.938173 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 04:57:51.938191 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 4 04:57:51.938207 kernel: raid6: avx2x4 gen() 14813 MB/s Nov 4 04:57:51.938226 kernel: raid6: avx2x2 gen() 14888 MB/s Nov 4 04:57:51.938244 kernel: raid6: avx2x1 gen() 11882 MB/s Nov 4 04:57:51.938272 kernel: raid6: using algorithm avx2x2 gen() 14888 MB/s Nov 4 04:57:51.938291 kernel: raid6: .... 
xor() 12585 MB/s, rmw enabled Nov 4 04:57:51.938310 kernel: raid6: using avx2x2 recovery algorithm Nov 4 04:57:51.938328 kernel: xor: automatically using best checksumming function avx Nov 4 04:57:51.938359 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 04:57:51.938379 kernel: BTRFS: device fsid 6f0a5369-79b6-4a87-b9a6-85ec05be306c devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (162) Nov 4 04:57:51.938416 kernel: BTRFS info (device dm-0): first mount of filesystem 6f0a5369-79b6-4a87-b9a6-85ec05be306c Nov 4 04:57:51.938447 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:57:51.938465 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 04:57:51.938493 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 04:57:51.938512 kernel: loop: module loaded Nov 4 04:57:51.938530 kernel: loop0: detected capacity change from 0 to 100136 Nov 4 04:57:51.938549 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 04:57:51.938571 systemd[1]: Successfully made /usr/ read-only. Nov 4 04:57:51.938603 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 04:57:51.938624 systemd[1]: Detected virtualization kvm. Nov 4 04:57:51.938643 systemd[1]: Detected architecture x86-64. Nov 4 04:57:51.938661 systemd[1]: Running in initrd. Nov 4 04:57:51.938680 systemd[1]: No hostname configured, using default hostname. Nov 4 04:57:51.938707 systemd[1]: Hostname set to . Nov 4 04:57:51.938726 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 04:57:51.938745 systemd[1]: Queued start job for default target initrd.target. 
Nov 4 04:57:51.938764 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 04:57:51.938784 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 04:57:51.938804 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 04:57:51.938825 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 4 04:57:51.938853 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 04:57:51.938873 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 04:57:51.938893 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 04:57:51.938913 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 04:57:51.938932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 04:57:51.938951 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 04:57:51.938977 systemd[1]: Reached target paths.target - Path Units. Nov 4 04:57:51.938997 systemd[1]: Reached target slices.target - Slice Units. Nov 4 04:57:51.939016 systemd[1]: Reached target swap.target - Swaps. Nov 4 04:57:51.939034 systemd[1]: Reached target timers.target - Timer Units. Nov 4 04:57:51.939054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 04:57:51.939073 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 04:57:51.939092 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 04:57:51.939119 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 4 04:57:51.939139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 4 04:57:51.939158 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 04:57:51.939176 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 04:57:51.939192 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 04:57:51.939208 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 4 04:57:51.939230 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 04:57:51.939246 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 04:57:51.939260 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 4 04:57:51.939277 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 04:57:51.939295 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 04:57:51.939314 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 04:57:51.939334 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 04:57:51.939362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:57:51.939384 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 04:57:51.939423 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 04:57:51.939453 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 04:57:51.939484 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 04:57:51.939579 systemd-journald[298]: Collecting audit messages is disabled. Nov 4 04:57:51.939623 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 4 04:57:51.939652 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 4 04:57:51.939673 kernel: Bridge firewalling registered Nov 4 04:57:51.939694 systemd-journald[298]: Journal started Nov 4 04:57:51.939731 systemd-journald[298]: Runtime Journal (/run/log/journal/d47b931234f6435090a0b67482567152) is 4.8M, max 39.1M, 34.2M free. Nov 4 04:57:51.926899 systemd-modules-load[299]: Inserted module 'br_netfilter' Nov 4 04:57:51.982426 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 04:57:51.984032 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 04:57:51.985226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:57:51.991007 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 04:57:51.993635 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 04:57:51.996695 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 04:57:52.002719 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 04:57:52.021140 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:57:52.029729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 04:57:52.034017 systemd-tmpfiles[318]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 04:57:52.041833 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 04:57:52.043095 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:57:52.045771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 04:57:52.057163 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 4 04:57:52.093165 dracut-cmdline[340]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01 Nov 4 04:57:52.111015 systemd-resolved[331]: Positive Trust Anchors: Nov 4 04:57:52.111030 systemd-resolved[331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 04:57:52.111034 systemd-resolved[331]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 04:57:52.111072 systemd-resolved[331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 04:57:52.135221 systemd-resolved[331]: Defaulting to hostname 'linux'. Nov 4 04:57:52.137405 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 04:57:52.138108 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 04:57:52.223450 kernel: Loading iSCSI transport class v2.0-870. 
Nov 4 04:57:52.240447 kernel: iscsi: registered transport (tcp) Nov 4 04:57:52.267451 kernel: iscsi: registered transport (qla4xxx) Nov 4 04:57:52.267554 kernel: QLogic iSCSI HBA Driver Nov 4 04:57:52.300716 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 04:57:52.325094 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 04:57:52.327595 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 04:57:52.385283 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 04:57:52.388873 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 04:57:52.390860 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 04:57:52.431679 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 04:57:52.435357 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 04:57:52.471937 systemd-udevd[577]: Using default interface naming scheme 'v257'. Nov 4 04:57:52.486525 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 04:57:52.490464 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 04:57:52.521310 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 04:57:52.523124 dracut-pre-trigger[654]: rd.md=0: removing MD RAID activation Nov 4 04:57:52.527689 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 04:57:52.560146 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 04:57:52.568607 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 4 04:57:52.584985 systemd-networkd[691]: lo: Link UP Nov 4 04:57:52.584994 systemd-networkd[691]: lo: Gained carrier Nov 4 04:57:52.588772 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 04:57:52.589333 systemd[1]: Reached target network.target - Network. Nov 4 04:57:52.652250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 04:57:52.656831 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 04:57:52.756158 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 4 04:57:52.785064 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 4 04:57:52.802928 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 04:57:52.811185 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 4 04:57:52.814571 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 04:57:52.836380 disk-uuid[748]: Primary Header is updated. Nov 4 04:57:52.836380 disk-uuid[748]: Secondary Entries is updated. Nov 4 04:57:52.836380 disk-uuid[748]: Secondary Header is updated. Nov 4 04:57:52.843755 kernel: cryptd: max_cpu_qlen set to 1000 Nov 4 04:57:52.904473 kernel: AES CTR mode by8 optimization enabled Nov 4 04:57:52.915633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 04:57:52.916697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:57:52.918842 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:57:52.931780 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 4 04:57:52.929277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 4 04:57:52.981961 systemd-networkd[691]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Nov 4 04:57:52.981974 systemd-networkd[691]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 4 04:57:52.983696 systemd-networkd[691]: eth0: Link UP Nov 4 04:57:52.983890 systemd-networkd[691]: eth0: Gained carrier Nov 4 04:57:52.983904 systemd-networkd[691]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Nov 4 04:57:52.991751 systemd-networkd[691]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:57:52.991764 systemd-networkd[691]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 04:57:52.992565 systemd-networkd[691]: eth1: Link UP Nov 4 04:57:52.992758 systemd-networkd[691]: eth1: Gained carrier Nov 4 04:57:52.992772 systemd-networkd[691]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:57:53.004446 systemd-networkd[691]: eth0: DHCPv4 address 164.92.104.185/19, gateway 164.92.96.1 acquired from 169.254.169.253 Nov 4 04:57:53.013160 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 04:57:53.013495 systemd-networkd[691]: eth1: DHCPv4 address 10.124.0.26/20 acquired from 169.254.169.253 Nov 4 04:57:53.075253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:57:53.078072 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 04:57:53.078870 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 04:57:53.080146 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Nov 4 04:57:53.083215 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 04:57:53.112914 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 4 04:57:53.882684 disk-uuid[749]: Warning: The kernel is still using the old partition table. Nov 4 04:57:53.882684 disk-uuid[749]: The new table will be used at the next reboot or after you Nov 4 04:57:53.882684 disk-uuid[749]: run partprobe(8) or kpartx(8) Nov 4 04:57:53.882684 disk-uuid[749]: The operation has completed successfully. Nov 4 04:57:53.891060 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 04:57:53.891200 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 04:57:53.893520 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 4 04:57:53.930469 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835) Nov 4 04:57:53.930586 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:57:53.933080 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:57:53.939687 kernel: BTRFS info (device vda6): turning on async discard Nov 4 04:57:53.939790 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 04:57:53.948443 kernel: BTRFS info (device vda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:57:53.949533 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 04:57:53.952406 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 4 04:57:54.155880 ignition[854]: Ignition 2.22.0 Nov 4 04:57:54.155905 ignition[854]: Stage: fetch-offline Nov 4 04:57:54.155947 ignition[854]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:57:54.155957 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 04:57:54.156076 ignition[854]: parsed url from cmdline: "" Nov 4 04:57:54.159952 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 04:57:54.156081 ignition[854]: no config URL provided Nov 4 04:57:54.156087 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 04:57:54.156097 ignition[854]: no config at "/usr/lib/ignition/user.ign" Nov 4 04:57:54.156103 ignition[854]: failed to fetch config: resource requires networking Nov 4 04:57:54.156503 ignition[854]: Ignition finished successfully Nov 4 04:57:54.163602 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 4 04:57:54.208365 ignition[861]: Ignition 2.22.0 Nov 4 04:57:54.208380 ignition[861]: Stage: fetch Nov 4 04:57:54.208561 ignition[861]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:57:54.208572 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 04:57:54.208717 ignition[861]: parsed url from cmdline: "" Nov 4 04:57:54.208723 ignition[861]: no config URL provided Nov 4 04:57:54.208731 ignition[861]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 04:57:54.208742 ignition[861]: no config at "/usr/lib/ignition/user.ign" Nov 4 04:57:54.208778 ignition[861]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 4 04:57:54.222685 ignition[861]: GET result: OK Nov 4 04:57:54.222888 ignition[861]: parsing config with SHA512: 6688ba95949ebca3f38259284d9d979e8c4331474c79a2f731fd852fee6ab2c228b9b82be5ef0be2e73f8a902a9e030a315ddaee5397b1bde275b4b07340a9af Nov 4 04:57:54.232350 unknown[861]: fetched base config from "system" Nov 4 04:57:54.232363 unknown[861]: fetched base config from 
"system" Nov 4 04:57:54.232722 ignition[861]: fetch: fetch complete Nov 4 04:57:54.232370 unknown[861]: fetched user config from "digitalocean" Nov 4 04:57:54.232728 ignition[861]: fetch: fetch passed Nov 4 04:57:54.234755 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 4 04:57:54.232782 ignition[861]: Ignition finished successfully Nov 4 04:57:54.237101 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 4 04:57:54.278256 ignition[867]: Ignition 2.22.0 Nov 4 04:57:54.279240 ignition[867]: Stage: kargs Nov 4 04:57:54.279497 ignition[867]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:57:54.279512 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 04:57:54.282844 ignition[867]: kargs: kargs passed Nov 4 04:57:54.283349 ignition[867]: Ignition finished successfully Nov 4 04:57:54.284963 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 4 04:57:54.287705 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 04:57:54.294830 systemd-networkd[691]: eth1: Gained IPv6LL Nov 4 04:57:54.333618 ignition[873]: Ignition 2.22.0 Nov 4 04:57:54.333630 ignition[873]: Stage: disks Nov 4 04:57:54.333827 ignition[873]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:57:54.333838 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 04:57:54.335544 ignition[873]: disks: disks passed Nov 4 04:57:54.335616 ignition[873]: Ignition finished successfully Nov 4 04:57:54.336995 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 04:57:54.342101 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 04:57:54.343355 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 04:57:54.344465 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 04:57:54.345524 systemd[1]: Reached target sysinit.target - System Initialization. 
Nov 4 04:57:54.346453 systemd[1]: Reached target basic.target - Basic System. Nov 4 04:57:54.349004 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 04:57:54.388476 systemd-fsck[882]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 4 04:57:54.392535 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 04:57:54.395352 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 04:57:54.528417 kernel: EXT4-fs (vda9): mounted filesystem c35327fb-3cdd-496e-85aa-9e1b4133507f r/w with ordered data mode. Quota mode: none. Nov 4 04:57:54.529286 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 04:57:54.530811 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 04:57:54.533361 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 04:57:54.535554 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 04:57:54.541070 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Nov 4 04:57:54.546465 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 4 04:57:54.548102 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 04:57:54.550038 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 04:57:54.557207 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 4 04:57:54.568823 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890) Nov 4 04:57:54.568901 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:57:54.572187 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 4 04:57:54.573154 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:57:54.589212 kernel: BTRFS info (device vda6): turning on async discard Nov 4 04:57:54.589293 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 04:57:54.593631 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 04:57:54.681559 systemd-networkd[691]: eth0: Gained IPv6LL Nov 4 04:57:54.687774 coreos-metadata[892]: Nov 04 04:57:54.687 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 4 04:57:54.698546 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 04:57:54.702107 coreos-metadata[892]: Nov 04 04:57:54.702 INFO Fetch successful Nov 4 04:57:54.703562 coreos-metadata[893]: Nov 04 04:57:54.703 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 4 04:57:54.707260 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory Nov 4 04:57:54.711177 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Nov 4 04:57:54.711335 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Nov 4 04:57:54.714208 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 04:57:54.716142 coreos-metadata[893]: Nov 04 04:57:54.716 INFO Fetch successful Nov 4 04:57:54.721121 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 04:57:54.723021 coreos-metadata[893]: Nov 04 04:57:54.722 INFO wrote hostname ci-4508.0.0-n-4006da48af to /sysroot/etc/hostname Nov 4 04:57:54.724480 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 4 04:57:54.836741 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 04:57:54.839128 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 04:57:54.840594 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Nov 4 04:57:54.863427 kernel: BTRFS info (device vda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:57:54.880789 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 4 04:57:54.906054 ignition[1013]: INFO : Ignition 2.22.0 Nov 4 04:57:54.906054 ignition[1013]: INFO : Stage: mount Nov 4 04:57:54.908557 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:57:54.908557 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 04:57:54.908557 ignition[1013]: INFO : mount: mount passed Nov 4 04:57:54.908557 ignition[1013]: INFO : Ignition finished successfully Nov 4 04:57:54.909775 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 04:57:54.913341 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 04:57:54.916303 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 04:57:54.941612 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 04:57:54.971432 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023) Nov 4 04:57:54.973417 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:57:54.973477 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:57:54.979362 kernel: BTRFS info (device vda6): turning on async discard Nov 4 04:57:54.979461 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 04:57:54.982489 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 04:57:55.022449 ignition[1039]: INFO : Ignition 2.22.0 Nov 4 04:57:55.022449 ignition[1039]: INFO : Stage: files Nov 4 04:57:55.022449 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:57:55.022449 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 04:57:55.027218 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping Nov 4 04:57:55.028312 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 04:57:55.028312 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 04:57:55.032693 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 04:57:55.033823 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 04:57:55.035065 unknown[1039]: wrote ssh authorized keys file for user: core Nov 4 04:57:55.036162 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 04:57:55.037371 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 4 04:57:55.038461 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 4 04:57:55.074186 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 04:57:55.136466 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 4 04:57:55.136466 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 4 04:57:55.136466 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 
04:57:55.136466 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 04:57:55.136466 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 04:57:55.136466 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 04:57:55.136466 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 04:57:55.136466 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 04:57:55.143768 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 04:57:55.143768 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 04:57:55.143768 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 04:57:55.143768 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 04:57:55.143768 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 04:57:55.143768 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 04:57:55.143768 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 4 04:57:55.574001 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 4 04:57:56.491666 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 04:57:56.491666 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 4 04:57:56.494193 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 04:57:56.495723 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 04:57:56.495723 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 4 04:57:56.495723 ignition[1039]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 4 04:57:56.498517 ignition[1039]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 04:57:56.498517 ignition[1039]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 04:57:56.498517 ignition[1039]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 04:57:56.498517 ignition[1039]: INFO : files: files passed Nov 4 04:57:56.498517 ignition[1039]: INFO : Ignition finished successfully Nov 4 04:57:56.498704 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 04:57:56.502602 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 04:57:56.505241 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Nov 4 04:57:56.522568 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 04:57:56.522717 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 04:57:56.535651 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:57:56.535651 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:57:56.538790 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:57:56.539519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 04:57:56.540946 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 04:57:56.543083 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 04:57:56.603243 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 04:57:56.603415 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 04:57:56.605045 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 04:57:56.605566 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 04:57:56.606764 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 04:57:56.607931 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 04:57:56.684018 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 04:57:56.696850 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 04:57:56.810108 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 04:57:56.810448 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 04:57:56.812971 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 04:57:56.814471 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 04:57:56.815536 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 04:57:56.815795 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 04:57:56.817190 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 04:57:56.817728 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 04:57:56.818903 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 04:57:56.819916 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 04:57:56.820871 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 04:57:56.822031 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 04:57:56.823264 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 04:57:56.824334 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 04:57:56.825513 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 04:57:56.826671 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 04:57:56.827736 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 04:57:56.828872 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 04:57:56.829035 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 04:57:56.830229 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 04:57:56.830967 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 04:57:56.831916 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 04:57:56.832263 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 04:57:56.833031 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 04:57:56.833196 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 04:57:56.834789 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 04:57:56.834985 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 04:57:56.836134 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 04:57:56.836273 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 04:57:56.837251 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 4 04:57:56.837375 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 4 04:57:56.840656 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 04:57:56.841639 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 04:57:56.841780 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 04:57:56.855676 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 04:57:56.856185 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 04:57:56.856342 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 04:57:56.857135 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 04:57:56.857316 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 04:57:56.861023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 04:57:56.861211 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 04:57:56.872734 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 04:57:56.872855 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 04:57:56.893731 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 04:57:56.899249 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 04:57:56.903729 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 04:57:56.906463 ignition[1096]: INFO : Ignition 2.22.0
Nov 4 04:57:56.906463 ignition[1096]: INFO : Stage: umount
Nov 4 04:57:56.906463 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 04:57:56.906463 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 04:57:56.906463 ignition[1096]: INFO : umount: umount passed
Nov 4 04:57:56.906463 ignition[1096]: INFO : Ignition finished successfully
Nov 4 04:57:56.907642 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 04:57:56.907822 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 04:57:56.909707 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 04:57:56.909827 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 04:57:56.910698 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 04:57:56.910765 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 04:57:56.911574 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 4 04:57:56.911631 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 4 04:57:56.912303 systemd[1]: Stopped target network.target - Network.
Nov 4 04:57:56.913142 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 04:57:56.913197 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 04:57:56.913999 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 04:57:56.914903 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 04:57:56.918616 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 04:57:56.919290 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 04:57:56.920155 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 04:57:56.921140 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 04:57:56.921195 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 04:57:56.921967 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 04:57:56.922021 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 04:57:56.922846 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 04:57:56.922917 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 04:57:56.923652 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 04:57:56.923700 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 04:57:56.924459 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 04:57:56.924507 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 04:57:56.925435 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 04:57:56.926580 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 04:57:56.937899 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 04:57:56.938027 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 04:57:56.941057 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 04:57:56.941183 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 04:57:56.945018 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 04:57:56.945881 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 04:57:56.945929 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 04:57:56.947775 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 04:57:56.948912 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 04:57:56.948988 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 04:57:56.949563 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 04:57:56.949620 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 04:57:56.950087 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 04:57:56.950129 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 04:57:56.954662 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 04:57:56.967558 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 04:57:56.967806 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 04:57:56.968651 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 04:57:56.968696 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 04:57:56.969249 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 04:57:56.969296 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 04:57:56.969765 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 04:57:56.969822 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 04:57:56.970536 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 04:57:56.970608 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 04:57:56.971373 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 04:57:56.971503 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 04:57:56.973746 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 04:57:56.975718 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 04:57:56.975783 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 04:57:56.977745 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 04:57:56.977803 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 04:57:56.978370 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 04:57:56.978435 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:57:56.993124 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 04:57:56.998786 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 04:57:57.002043 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 04:57:57.002162 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 04:57:57.003490 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 04:57:57.004985 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 04:57:57.031512 systemd[1]: Switching root.
Nov 4 04:57:57.060140 systemd-journald[298]: Journal stopped
Nov 4 04:57:58.261647 systemd-journald[298]: Received SIGTERM from PID 1 (systemd).
Nov 4 04:57:58.261727 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 04:57:58.261744 kernel: SELinux: policy capability open_perms=1
Nov 4 04:57:58.261757 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 04:57:58.261769 kernel: SELinux: policy capability always_check_network=0
Nov 4 04:57:58.261791 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 04:57:58.261810 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 04:57:58.261828 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 04:57:58.261843 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 04:57:58.261856 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 04:57:58.261868 kernel: audit: type=1403 audit(1762232277.236:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 04:57:58.261882 systemd[1]: Successfully loaded SELinux policy in 71.135ms.
Nov 4 04:57:58.261904 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.296ms.
Nov 4 04:57:58.261920 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 04:57:58.261938 systemd[1]: Detected virtualization kvm.
Nov 4 04:57:58.261957 systemd[1]: Detected architecture x86-64.
Nov 4 04:57:58.261970 systemd[1]: Detected first boot.
Nov 4 04:57:58.261984 systemd[1]: Hostname set to .
Nov 4 04:57:58.261999 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 04:57:58.262012 zram_generator::config[1141]: No configuration found.
Nov 4 04:57:58.262032 kernel: Guest personality initialized and is inactive
Nov 4 04:57:58.262045 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 04:57:58.262057 kernel: Initialized host personality
Nov 4 04:57:58.262070 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 04:57:58.262083 systemd[1]: Populated /etc with preset unit settings.
Nov 4 04:57:58.262096 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 04:57:58.262110 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 04:57:58.262123 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 04:57:58.262142 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 04:57:58.262155 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 04:57:58.262168 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 04:57:58.262181 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 04:57:58.262195 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 04:57:58.262209 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 04:57:58.262227 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 04:57:58.262244 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 04:57:58.262271 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 04:57:58.262290 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 04:57:58.262309 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 04:57:58.262328 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 04:57:58.262349 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 04:57:58.262363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 04:57:58.262376 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 04:57:58.275103 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 04:57:58.275163 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 04:57:58.275178 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 04:57:58.275216 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 04:57:58.275229 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 04:57:58.275243 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 04:57:58.275256 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 04:57:58.275273 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 04:57:58.275288 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 04:57:58.275301 systemd[1]: Reached target swap.target - Swaps.
Nov 4 04:57:58.275322 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 04:57:58.275351 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 04:57:58.275371 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 04:57:58.275406 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 04:57:58.275422 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 04:57:58.275436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 04:57:58.275449 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 04:57:58.275463 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 04:57:58.275484 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 04:57:58.275499 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 04:57:58.275513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:58.275527 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 04:57:58.275543 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 04:57:58.275565 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 04:57:58.275588 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 04:57:58.275616 systemd[1]: Reached target machines.target - Containers.
Nov 4 04:57:58.275637 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 04:57:58.275658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 04:57:58.275673 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 04:57:58.275688 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 04:57:58.275702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 04:57:58.275723 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 04:57:58.275736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 04:57:58.275750 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 04:57:58.275763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 04:57:58.275778 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 04:57:58.275792 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 04:57:58.275806 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 04:57:58.275826 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 04:57:58.275840 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 04:57:58.275855 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 04:57:58.275871 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 04:57:58.275885 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 04:57:58.275899 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 04:57:58.275913 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 04:57:58.275932 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 04:57:58.275946 kernel: fuse: init (API version 7.41)
Nov 4 04:57:58.275961 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 04:57:58.275975 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:58.275995 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 04:57:58.276009 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 04:57:58.276022 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 04:57:58.276036 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 04:57:58.276050 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 04:57:58.276064 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 04:57:58.276079 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 04:57:58.276098 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 04:57:58.276113 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 04:57:58.276127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 04:57:58.276146 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 04:57:58.276165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 04:57:58.276179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 04:57:58.276196 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 04:57:58.276210 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 04:57:58.276224 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 04:57:58.276238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 04:57:58.276251 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 04:57:58.276270 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 04:57:58.276286 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 04:57:58.276307 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 04:57:58.276326 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 04:57:58.276347 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 04:57:58.276361 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 04:57:58.276374 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 04:57:58.276487 systemd-journald[1218]: Collecting audit messages is disabled.
Nov 4 04:57:58.276515 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 04:57:58.276537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 04:57:58.276552 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 04:57:58.276566 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 04:57:58.276581 systemd-journald[1218]: Journal started
Nov 4 04:57:58.276612 systemd-journald[1218]: Runtime Journal (/run/log/journal/d47b931234f6435090a0b67482567152) is 4.8M, max 39.1M, 34.2M free.
Nov 4 04:57:57.931829 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 04:57:57.941130 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 4 04:57:57.941770 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 04:57:58.283428 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 04:57:58.286424 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 04:57:58.286504 kernel: ACPI: bus type drm_connector registered
Nov 4 04:57:58.292430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 04:57:58.297415 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 04:57:58.301459 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 04:57:58.310621 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 04:57:58.311568 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 04:57:58.311751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 04:57:58.326419 kernel: loop1: detected capacity change from 0 to 119080
Nov 4 04:57:58.336622 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 04:57:58.338833 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 04:57:58.345885 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 04:57:58.346575 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 04:57:58.350314 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 04:57:58.362101 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 04:57:58.378435 kernel: loop2: detected capacity change from 0 to 8
Nov 4 04:57:58.392246 systemd-journald[1218]: Time spent on flushing to /var/log/journal/d47b931234f6435090a0b67482567152 is 38.702ms for 995 entries.
Nov 4 04:57:58.392246 systemd-journald[1218]: System Journal (/var/log/journal/d47b931234f6435090a0b67482567152) is 8M, max 163.5M, 155.5M free.
Nov 4 04:57:58.434507 systemd-journald[1218]: Received client request to flush runtime journal.
Nov 4 04:57:58.434561 kernel: loop3: detected capacity change from 0 to 224512
Nov 4 04:57:58.404636 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 04:57:58.415898 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 04:57:58.436167 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 04:57:58.439474 kernel: loop4: detected capacity change from 0 to 111544
Nov 4 04:57:58.455149 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 04:57:58.458677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 04:57:58.461678 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 04:57:58.484587 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 4 04:57:58.488420 kernel: loop5: detected capacity change from 0 to 119080
Nov 4 04:57:58.505409 kernel: loop6: detected capacity change from 0 to 8
Nov 4 04:57:58.508416 kernel: loop7: detected capacity change from 0 to 224512
Nov 4 04:57:58.515552 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Nov 4 04:57:58.515572 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Nov 4 04:57:58.524941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 04:57:58.528412 kernel: loop1: detected capacity change from 0 to 111544
Nov 4 04:57:58.541576 (sd-merge)[1284]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'.
Nov 4 04:57:58.551184 (sd-merge)[1284]: Merged extensions into '/usr'.
Nov 4 04:57:58.563584 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 4 04:57:58.563608 systemd[1]: Reloading...
Nov 4 04:57:58.720422 zram_generator::config[1319]: No configuration found.
Nov 4 04:57:58.727386 systemd-resolved[1280]: Positive Trust Anchors:
Nov 4 04:57:58.729322 systemd-resolved[1280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 04:57:58.729340 systemd-resolved[1280]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 04:57:58.729382 systemd-resolved[1280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 04:57:58.753699 systemd-resolved[1280]: Using system hostname 'ci-4508.0.0-n-4006da48af'.
Nov 4 04:57:58.959627 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 4 04:57:58.960006 systemd[1]: Reloading finished in 395 ms.
Nov 4 04:57:58.972042 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 4 04:57:58.972796 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 04:57:58.973571 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 4 04:57:58.976136 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 04:57:58.980524 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 04:57:58.985577 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 04:57:58.993153 systemd[1]: Starting ensure-sysext.service...
Nov 4 04:57:58.998943 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 04:57:59.024169 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 04:57:59.026518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 04:57:59.033238 systemd[1]: Reload requested from client PID 1363 ('systemctl') (unit ensure-sysext.service)...
Nov 4 04:57:59.033260 systemd[1]: Reloading...
Nov 4 04:57:59.050587 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 04:57:59.050648 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 04:57:59.051077 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 4 04:57:59.051452 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 4 04:57:59.053447 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 4 04:57:59.053762 systemd-tmpfiles[1364]: ACLs are not supported, ignoring.
Nov 4 04:57:59.053819 systemd-tmpfiles[1364]: ACLs are not supported, ignoring.
Nov 4 04:57:59.062865 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 04:57:59.062880 systemd-tmpfiles[1364]: Skipping /boot
Nov 4 04:57:59.082054 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 04:57:59.082068 systemd-tmpfiles[1364]: Skipping /boot
Nov 4 04:57:59.154437 zram_generator::config[1402]: No configuration found.
Nov 4 04:57:59.345651 systemd[1]: Reloading finished in 311 ms.
Nov 4 04:57:59.355807 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 4 04:57:59.369829 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 04:57:59.379851 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 04:57:59.383664 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 4 04:57:59.388712 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 4 04:57:59.392029 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 4 04:57:59.399986 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 04:57:59.404289 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 4 04:57:59.408586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:59.408782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 04:57:59.415531 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 04:57:59.421489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 04:57:59.429171 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 04:57:59.430509 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 04:57:59.430652 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 04:57:59.430762 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:59.437088 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:59.437276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 04:57:59.438551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 04:57:59.438672 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 04:57:59.438761 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:59.444154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:59.444403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 04:57:59.449045 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 04:57:59.450159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 04:57:59.450316 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 04:57:59.450476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:57:59.471545 systemd[1]: Finished ensure-sysext.service.
Nov 4 04:57:59.481300 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 4 04:57:59.495510 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 4 04:57:59.553830 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 04:57:59.554144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 04:57:59.556881 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 04:57:59.557048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 04:57:59.559914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 04:57:59.564456 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 04:57:59.566718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 04:57:59.566965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 04:57:59.571851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 04:57:59.571955 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 04:57:59.591227 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 4 04:57:59.616483 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 4 04:57:59.619856 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 04:57:59.630465 systemd-udevd[1445]: Using default interface naming scheme 'v257'.
Nov 4 04:57:59.639137 augenrules[1481]: No rules
Nov 4 04:57:59.640197 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 04:57:59.640854 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 04:57:59.694351 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 04:57:59.702269 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 04:57:59.723679 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 4 04:57:59.724629 systemd[1]: Reached target time-set.target - System Time Set.
Nov 4 04:57:59.980493 systemd-networkd[1488]: lo: Link UP
Nov 4 04:57:59.980505 systemd-networkd[1488]: lo: Gained carrier
Nov 4 04:57:59.980550 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Nov 4 04:57:59.998937 systemd-networkd[1488]: eth1: Configuring with /run/systemd/network/10-ba:d7:8c:d0:dd:27.network.
Nov 4 04:57:59.999670 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 4 04:58:00.002910 systemd-networkd[1488]: eth1: Link UP
Nov 4 04:58:00.002988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:58:00.003203 systemd-networkd[1488]: eth1: Gained carrier
Nov 4 04:58:00.003252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 04:58:00.006317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 04:58:00.014825 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:00.016763 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 04:58:00.025851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 04:58:00.026721 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 04:58:00.026781 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 04:58:00.026826 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 04:58:00.026851 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:58:00.027175 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 04:58:00.028246 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 4 04:58:00.028363 systemd[1]: Reached target network.target - Network.
Nov 4 04:58:00.031875 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 4 04:58:00.037736 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 4 04:58:00.090273 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 4 04:58:00.085519 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 4 04:58:00.116015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 04:58:00.121061 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 04:58:00.127976 kernel: mousedev: PS/2 mouse device common for all mice
Nov 4 04:58:00.137066 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 04:58:00.138475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 04:58:00.141159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 04:58:00.142175 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 04:58:00.146369 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 4 04:58:00.149237 systemd-networkd[1488]: eth0: Configuring with /run/systemd/network/10-de:42:2e:fa:f3:55.network.
Nov 4 04:58:00.150975 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:00.151691 systemd-networkd[1488]: eth0: Link UP
Nov 4 04:58:00.151919 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:00.152819 systemd-networkd[1488]: eth0: Gained carrier
Nov 4 04:58:00.159454 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:00.159970 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:00.210051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 04:58:00.224438 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 4 04:58:00.221159 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 4 04:58:00.225229 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 04:58:00.226557 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 04:58:00.233527 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 4 04:58:00.234050 kernel: ACPI: button: Power Button [PWRF]
Nov 4 04:58:00.236841 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 4 04:58:00.295043 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 4 04:58:00.362099 ldconfig[1443]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 4 04:58:00.366935 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 4 04:58:00.371663 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 4 04:58:00.397409 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 4 04:58:00.399539 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 04:58:00.400163 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 4 04:58:00.400974 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 4 04:58:00.401800 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 4 04:58:00.402539 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 4 04:58:00.403107 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 4 04:58:00.404586 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 4 04:58:00.405493 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 4 04:58:00.405530 systemd[1]: Reached target paths.target - Path Units.
Nov 4 04:58:00.406074 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 04:58:00.408925 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 4 04:58:00.412050 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 4 04:58:00.423715 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 4 04:58:00.424501 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 4 04:58:00.424972 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 4 04:58:00.431207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 4 04:58:00.432254 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 4 04:58:00.433469 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 4 04:58:00.434848 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 04:58:00.435281 systemd[1]: Reached target basic.target - Basic System.
Nov 4 04:58:00.435774 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 4 04:58:00.435802 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 4 04:58:00.436928 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 4 04:58:00.440589 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 4 04:58:00.442372 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 4 04:58:00.444680 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 4 04:58:00.448650 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 4 04:58:00.464879 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 4 04:58:00.465457 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 4 04:58:00.474727 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 4 04:58:00.478508 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 4 04:58:00.484697 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 4 04:58:00.493260 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 4 04:58:00.496700 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 4 04:58:00.504073 jq[1556]: false
Nov 4 04:58:00.513813 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 4 04:58:00.516526 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 4 04:58:00.517119 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 4 04:58:00.520481 systemd[1]: Starting update-engine.service - Update Engine...
Nov 4 04:58:00.522844 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Refreshing passwd entry cache
Nov 4 04:58:00.522489 oslogin_cache_refresh[1558]: Refreshing passwd entry cache
Nov 4 04:58:00.523483 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 4 04:58:00.527980 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 4 04:58:00.528777 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 4 04:58:00.529479 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 4 04:58:00.540192 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Failure getting users, quitting
Nov 4 04:58:00.540192 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 04:58:00.540192 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Refreshing group entry cache
Nov 4 04:58:00.539654 oslogin_cache_refresh[1558]: Failure getting users, quitting
Nov 4 04:58:00.539677 oslogin_cache_refresh[1558]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 04:58:00.539732 oslogin_cache_refresh[1558]: Refreshing group entry cache
Nov 4 04:58:00.546424 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Failure getting groups, quitting
Nov 4 04:58:00.546424 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 04:58:00.544235 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 4 04:58:00.542251 oslogin_cache_refresh[1558]: Failure getting groups, quitting
Nov 4 04:58:00.542270 oslogin_cache_refresh[1558]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 04:58:00.547024 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 4 04:58:00.569924 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 4 04:58:00.578941 jq[1570]: true
Nov 4 04:58:00.619118 extend-filesystems[1557]: Found /dev/vda6
Nov 4 04:58:00.619551 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 4 04:58:00.622097 systemd[1]: motdgen.service: Deactivated successfully.
Nov 4 04:58:00.624796 coreos-metadata[1553]: Nov 04 04:58:00.623 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 4 04:58:00.622366 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 4 04:58:00.631420 extend-filesystems[1557]: Found /dev/vda9
Nov 4 04:58:00.643304 coreos-metadata[1553]: Nov 04 04:58:00.640 INFO Fetch successful
Nov 4 04:58:00.649516 tar[1572]: linux-amd64/LICENSE
Nov 4 04:58:00.649516 tar[1572]: linux-amd64/helm
Nov 4 04:58:00.659598 extend-filesystems[1557]: Checking size of /dev/vda9
Nov 4 04:58:00.678443 jq[1590]: true
Nov 4 04:58:00.697257 dbus-daemon[1554]: [system] SELinux support is enabled
Nov 4 04:58:00.697650 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 4 04:58:00.702559 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 4 04:58:00.702595 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 4 04:58:00.703547 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 4 04:58:00.703653 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Nov 4 04:58:00.703673 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 4 04:58:00.719850 extend-filesystems[1557]: Resized partition /dev/vda9
Nov 4 04:58:00.722687 extend-filesystems[1609]: resize2fs 1.47.3 (8-Jul-2025)
Nov 4 04:58:00.735926 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks
Nov 4 04:58:00.735990 update_engine[1569]: I20251104 04:58:00.733562 1569 main.cc:92] Flatcar Update Engine starting
Nov 4 04:58:00.748892 systemd[1]: Started update-engine.service - Update Engine.
Nov 4 04:58:00.749698 update_engine[1569]: I20251104 04:58:00.748986 1569 update_check_scheduler.cc:74] Next update check in 5m19s
Nov 4 04:58:00.759914 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 4 04:58:00.880538 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 4 04:58:00.881646 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 04:58:00.936929 kernel: EXT4-fs (vda9): resized filesystem to 14138363
Nov 4 04:58:00.948595 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 4 04:58:00.948696 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 4 04:58:00.984215 kernel: EDAC MC: Ver: 3.0.0
Nov 4 04:58:00.984265 kernel: Console: switching to colour dummy device 80x25
Nov 4 04:58:00.984282 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 4 04:58:00.984297 kernel: [drm] features: -context_init
Nov 4 04:58:00.984312 kernel: [drm] number of scanouts: 1
Nov 4 04:58:00.984327 kernel: [drm] number of cap sets: 0
Nov 4 04:58:00.984341 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 4 04:58:00.986198 extend-filesystems[1609]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 4 04:58:00.986198 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 7
Nov 4 04:58:00.986198 extend-filesystems[1609]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long.
Nov 4 04:58:00.987146 extend-filesystems[1557]: Resized filesystem in /dev/vda9
Nov 4 04:58:00.989141 bash[1630]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 04:58:00.989755 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 4 04:58:00.990469 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 4 04:58:00.991135 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 4 04:58:00.997830 systemd[1]: Starting sshkeys.service...
Nov 4 04:58:01.015438 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 4 04:58:01.020437 kernel: Console: switching to colour frame buffer device 128x48
Nov 4 04:58:01.031164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
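The EXT4-fs and resize2fs entries above record the root filesystem growing on-line from 456704 to 14138363 4 KiB blocks. A quick sanity check of what those block counts mean in bytes (illustrative arithmetic, not part of the log):

```python
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output above

old_blocks, new_blocks = 456_704, 14_138_363
old_bytes = old_blocks * BLOCK_SIZE
new_bytes = new_blocks * BLOCK_SIZE

# Roughly 1.7 GiB grown to about 54 GiB, consistent with extend-filesystems
# expanding the small image-sized root partition to fill the droplet disk.
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
```

The "on-line resizing required" line reflects that /dev/vda9 was mounted at / during the resize, so resize2fs had to grow it through the kernel rather than off-line.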
Nov 4 04:58:01.051501 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 4 04:58:01.116461 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 4 04:58:01.121291 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 4 04:58:01.165594 sshd_keygen[1587]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 4 04:58:01.217358 containerd[1595]: time="2025-11-04T04:58:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 4 04:58:01.232975 containerd[1595]: time="2025-11-04T04:58:01.227917132Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4
Nov 4 04:58:01.243096 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 04:58:01.244497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.253090064Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.703µs"
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.253152532Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.253220263Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.253245905Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.255547294Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.255749587Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.255871961Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.255892904Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.256150974Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.256179296Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.256203091Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257541 containerd[1595]: time="2025-11-04T04:58:01.256219973Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257893 containerd[1595]: time="2025-11-04T04:58:01.257503554Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257893 containerd[1595]: time="2025-11-04T04:58:01.257525816Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257893 containerd[1595]: time="2025-11-04T04:58:01.257628758Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257893 containerd[1595]: time="2025-11-04T04:58:01.257861535Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257986 containerd[1595]: time="2025-11-04T04:58:01.257898625Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 04:58:01.257986 containerd[1595]: time="2025-11-04T04:58:01.257915312Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 4 04:58:01.257986 containerd[1595]: time="2025-11-04T04:58:01.257949753Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 4 04:58:01.260314 containerd[1595]: time="2025-11-04T04:58:01.259877477Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 4 04:58:01.260314 containerd[1595]: time="2025-11-04T04:58:01.260015048Z" level=info msg="metadata content store policy set" policy=shared
Nov 4 04:58:01.265609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269572636Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269646430Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269751261Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269765600Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269778879Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269795209Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269819729Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269832655Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269845256Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269858324Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269870527Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269881716Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269902112Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 4 04:58:01.270640 containerd[1595]: time="2025-11-04T04:58:01.269941787Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 4 04:58:01.271079 containerd[1595]: time="2025-11-04T04:58:01.270092722Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 4 04:58:01.271079 containerd[1595]: time="2025-11-04T04:58:01.270119141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 4 04:58:01.271079 containerd[1595]: time="2025-11-04T04:58:01.271001405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 4 04:58:01.271079 containerd[1595]: time="2025-11-04T04:58:01.271055055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 4 04:58:01.271079 containerd[1595]: time="2025-11-04T04:58:01.271076516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 4 04:58:01.271256 containerd[1595]: time="2025-11-04T04:58:01.271090660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 04:58:01.271256 containerd[1595]: time="2025-11-04T04:58:01.271105041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 04:58:01.271256 containerd[1595]: time="2025-11-04T04:58:01.271118206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 04:58:01.271256 containerd[1595]: time="2025-11-04T04:58:01.271129703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 04:58:01.271256 containerd[1595]: time="2025-11-04T04:58:01.271142124Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 04:58:01.271256 containerd[1595]: time="2025-11-04T04:58:01.271153838Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 4 04:58:01.271256 containerd[1595]: time="2025-11-04T04:58:01.271184398Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 04:58:01.280437 containerd[1595]: time="2025-11-04T04:58:01.271237517Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 04:58:01.280437 containerd[1595]: time="2025-11-04T04:58:01.277833380Z" level=info msg="Start snapshots syncer"
Nov 4 04:58:01.280437 containerd[1595]: time="2025-11-04T04:58:01.277894946Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278257721Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278445005Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278541616Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278715686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278739793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278765892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278779796Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278793473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278806104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278817003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278828519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278839275Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278873187Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 04:58:01.280616 containerd[1595]:
time="2025-11-04T04:58:01.278888290Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278897070Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278905844Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278914330Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278925601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278935690Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278948667Z" level=info msg="runtime interface created" Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278953218Z" level=info msg="created NRI interface" Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278961187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278974327Z" level=info msg="Connect containerd service" Nov 4 04:58:01.280616 containerd[1595]: time="2025-11-04T04:58:01.278994114Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 04:58:01.285431 containerd[1595]: time="2025-11-04T04:58:01.282149128Z" level=error msg="failed to load cni during init, please check CRI plugin status before 
setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 04:58:01.300558 coreos-metadata[1642]: Nov 04 04:58:01.297 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 4 04:58:01.319503 coreos-metadata[1642]: Nov 04 04:58:01.317 INFO Fetch successful Nov 4 04:58:01.360887 unknown[1642]: wrote ssh authorized keys file for user: core Nov 4 04:58:01.380371 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 04:58:01.386512 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 04:58:01.399663 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 04:58:01.446517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 04:58:01.448822 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:58:01.494266 update-ssh-keys[1672]: Updated "/home/core/.ssh/authorized_keys" Nov 4 04:58:01.497875 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 4 04:58:01.505308 systemd[1]: Finished sshkeys.service. Nov 4 04:58:01.513662 systemd-logind[1566]: New seat seat0. Nov 4 04:58:01.514935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:58:01.518231 systemd-logind[1566]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 04:58:01.518271 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 04:58:01.521348 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 04:58:01.534835 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 04:58:01.535648 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 04:58:01.541434 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.590744373Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.590830002Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.590862305Z" level=info msg="Start subscribing containerd event"
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.590914273Z" level=info msg="Start recovering state"
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.591038755Z" level=info msg="Start event monitor"
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.591104082Z" level=info msg="Start cni network conf syncer for default"
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.591112811Z" level=info msg="Start streaming server"
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.591120695Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.591130126Z" level=info msg="runtime interface starting up..."
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.591138121Z" level=info msg="starting plugins..."
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.591168176Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 4 04:58:01.591496 containerd[1595]: time="2025-11-04T04:58:01.591321817Z" level=info msg="containerd successfully booted in 0.374500s"
Nov 4 04:58:01.591459 systemd[1]: Started containerd.service - containerd container runtime.
Nov 4 04:58:01.596211 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 4 04:58:01.603674 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 4 04:58:01.607472 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 4 04:58:01.610936 systemd[1]: Reached target getty.target - Login Prompts.
Nov 4 04:58:01.654610 systemd-networkd[1488]: eth0: Gained IPv6LL
Nov 4 04:58:01.655476 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:01.659828 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 04:58:01.663383 systemd[1]: Reached target network-online.target - Network is Online.
Nov 4 04:58:01.670758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:58:01.675115 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 4 04:58:01.706063 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:58:01.759594 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 4 04:58:01.899418 tar[1572]: linux-amd64/README.md
Nov 4 04:58:01.911645 systemd-networkd[1488]: eth1: Gained IPv6LL
Nov 4 04:58:01.912118 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:01.924423 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 4 04:58:03.071194 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 4 04:58:03.073479 systemd[1]: Started sshd@0-164.92.104.185:22-147.75.109.163:45544.service - OpenSSH per-connection server daemon (147.75.109.163:45544).
Nov 4 04:58:03.095669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:58:03.102644 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 4 04:58:03.103932 systemd[1]: Startup finished in 2.744s (kernel) + 5.700s (initrd) + 5.937s (userspace) = 14.382s.
Nov 4 04:58:03.118903 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 04:58:03.225494 sshd[1718]: Accepted publickey for core from 147.75.109.163 port 45544 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 04:58:03.228605 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:58:03.244308 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 4 04:58:03.247774 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 4 04:58:03.255491 systemd-logind[1566]: New session 1 of user core.
Nov 4 04:58:03.278299 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 4 04:58:03.280622 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 4 04:58:03.299445 (systemd)[1731]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 4 04:58:03.305911 systemd-logind[1566]: New session c1 of user core.
Nov 4 04:58:03.479972 systemd[1731]: Queued start job for default target default.target.
Nov 4 04:58:03.487279 systemd[1731]: Created slice app.slice - User Application Slice.
Nov 4 04:58:03.487529 systemd[1731]: Reached target paths.target - Paths.
Nov 4 04:58:03.487644 systemd[1731]: Reached target timers.target - Timers.
Nov 4 04:58:03.491604 systemd[1731]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 4 04:58:03.513372 systemd[1731]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 4 04:58:03.513850 systemd[1731]: Reached target sockets.target - Sockets.
Nov 4 04:58:03.514015 systemd[1731]: Reached target basic.target - Basic System.
Nov 4 04:58:03.514059 systemd[1731]: Reached target default.target - Main User Target.
Nov 4 04:58:03.514097 systemd[1731]: Startup finished in 194ms.
Nov 4 04:58:03.514641 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 4 04:58:03.520708 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 4 04:58:03.549964 systemd[1]: Started sshd@1-164.92.104.185:22-147.75.109.163:45560.service - OpenSSH per-connection server daemon (147.75.109.163:45560).
Nov 4 04:58:03.644018 sshd[1744]: Accepted publickey for core from 147.75.109.163 port 45560 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 04:58:03.645616 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:58:03.651790 systemd-logind[1566]: New session 2 of user core.
Nov 4 04:58:03.658694 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 4 04:58:03.685445 sshd[1748]: Connection closed by 147.75.109.163 port 45560
Nov 4 04:58:03.686834 sshd-session[1744]: pam_unix(sshd:session): session closed for user core
Nov 4 04:58:03.697048 systemd[1]: sshd@1-164.92.104.185:22-147.75.109.163:45560.service: Deactivated successfully.
Nov 4 04:58:03.700169 systemd[1]: session-2.scope: Deactivated successfully.
Nov 4 04:58:03.702862 systemd-logind[1566]: Session 2 logged out. Waiting for processes to exit.
Nov 4 04:58:03.706626 systemd[1]: Started sshd@2-164.92.104.185:22-147.75.109.163:45568.service - OpenSSH per-connection server daemon (147.75.109.163:45568).
Nov 4 04:58:03.707312 systemd-logind[1566]: Removed session 2.
Nov 4 04:58:03.794286 sshd[1754]: Accepted publickey for core from 147.75.109.163 port 45568 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 04:58:03.796171 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:58:03.806764 systemd-logind[1566]: New session 3 of user core.
Nov 4 04:58:03.819772 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 4 04:58:03.844086 sshd[1757]: Connection closed by 147.75.109.163 port 45568
Nov 4 04:58:03.847538 sshd-session[1754]: pam_unix(sshd:session): session closed for user core
Nov 4 04:58:03.861506 systemd[1]: sshd@2-164.92.104.185:22-147.75.109.163:45568.service: Deactivated successfully.
Nov 4 04:58:03.866467 systemd[1]: session-3.scope: Deactivated successfully.
Nov 4 04:58:03.871691 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit.
Nov 4 04:58:03.876891 systemd[1]: Started sshd@3-164.92.104.185:22-147.75.109.163:45578.service - OpenSSH per-connection server daemon (147.75.109.163:45578).
Nov 4 04:58:03.881634 systemd-logind[1566]: Removed session 3.
Nov 4 04:58:03.924562 kubelet[1719]: E1104 04:58:03.924511 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 04:58:03.929060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 04:58:03.929295 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 04:58:03.930126 systemd[1]: kubelet.service: Consumed 1.431s CPU time, 264.9M memory peak.
Nov 4 04:58:03.975935 sshd[1763]: Accepted publickey for core from 147.75.109.163 port 45578 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 04:58:03.978293 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:58:03.987547 systemd-logind[1566]: New session 4 of user core.
Nov 4 04:58:03.998826 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 4 04:58:04.023500 sshd[1769]: Connection closed by 147.75.109.163 port 45578
Nov 4 04:58:04.024287 sshd-session[1763]: pam_unix(sshd:session): session closed for user core
Nov 4 04:58:04.044635 systemd[1]: sshd@3-164.92.104.185:22-147.75.109.163:45578.service: Deactivated successfully.
Nov 4 04:58:04.048530 systemd[1]: session-4.scope: Deactivated successfully.
Nov 4 04:58:04.051642 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit.
Nov 4 04:58:04.054367 systemd[1]: Started sshd@4-164.92.104.185:22-147.75.109.163:45582.service - OpenSSH per-connection server daemon (147.75.109.163:45582).
Nov 4 04:58:04.056970 systemd-logind[1566]: Removed session 4.
Nov 4 04:58:04.125773 sshd[1775]: Accepted publickey for core from 147.75.109.163 port 45582 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 04:58:04.127343 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:58:04.133072 systemd-logind[1566]: New session 5 of user core.
Nov 4 04:58:04.147695 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 4 04:58:04.177132 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 4 04:58:04.177439 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:58:04.190761 sudo[1779]: pam_unix(sudo:session): session closed for user root
Nov 4 04:58:04.194912 sshd[1778]: Connection closed by 147.75.109.163 port 45582
Nov 4 04:58:04.195317 sshd-session[1775]: pam_unix(sshd:session): session closed for user core
Nov 4 04:58:04.205725 systemd[1]: sshd@4-164.92.104.185:22-147.75.109.163:45582.service: Deactivated successfully.
Nov 4 04:58:04.208473 systemd[1]: session-5.scope: Deactivated successfully.
Nov 4 04:58:04.210654 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit.
Nov 4 04:58:04.212907 systemd[1]: Started sshd@5-164.92.104.185:22-147.75.109.163:45588.service - OpenSSH per-connection server daemon (147.75.109.163:45588).
Nov 4 04:58:04.216753 systemd-logind[1566]: Removed session 5.
Nov 4 04:58:04.295650 sshd[1785]: Accepted publickey for core from 147.75.109.163 port 45588 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 04:58:04.297720 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:58:04.304952 systemd-logind[1566]: New session 6 of user core.
Nov 4 04:58:04.311693 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 4 04:58:04.331550 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 4 04:58:04.331853 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:58:04.337336 sudo[1790]: pam_unix(sudo:session): session closed for user root
Nov 4 04:58:04.345752 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 4 04:58:04.346539 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:58:04.360803 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 04:58:04.411750 augenrules[1812]: No rules
Nov 4 04:58:04.413620 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 04:58:04.413946 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 04:58:04.415144 sudo[1789]: pam_unix(sudo:session): session closed for user root
Nov 4 04:58:04.418582 sshd[1788]: Connection closed by 147.75.109.163 port 45588
Nov 4 04:58:04.419093 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Nov 4 04:58:04.436078 systemd[1]: sshd@5-164.92.104.185:22-147.75.109.163:45588.service: Deactivated successfully.
Nov 4 04:58:04.438116 systemd[1]: session-6.scope: Deactivated successfully.
Nov 4 04:58:04.439290 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit.
Nov 4 04:58:04.443245 systemd[1]: Started sshd@6-164.92.104.185:22-147.75.109.163:45596.service - OpenSSH per-connection server daemon (147.75.109.163:45596).
Nov 4 04:58:04.444541 systemd-logind[1566]: Removed session 6.
Nov 4 04:58:04.520351 sshd[1821]: Accepted publickey for core from 147.75.109.163 port 45596 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 04:58:04.522265 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:58:04.528848 systemd-logind[1566]: New session 7 of user core.
Nov 4 04:58:04.536694 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 4 04:58:04.556295 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 4 04:58:04.557161 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:58:05.109120 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 4 04:58:05.139015 (dockerd)[1843]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 4 04:58:05.550127 dockerd[1843]: time="2025-11-04T04:58:05.549991427Z" level=info msg="Starting up"
Nov 4 04:58:05.553431 dockerd[1843]: time="2025-11-04T04:58:05.553024454Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 4 04:58:05.574201 dockerd[1843]: time="2025-11-04T04:58:05.574049822Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 4 04:58:05.590953 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1388688949-merged.mount: Deactivated successfully.
Nov 4 04:58:05.686001 systemd[1]: var-lib-docker-metacopy\x2dcheck473642408-merged.mount: Deactivated successfully.
Nov 4 04:58:05.702447 dockerd[1843]: time="2025-11-04T04:58:05.702267402Z" level=info msg="Loading containers: start."
Nov 4 04:58:05.712972 kernel: Initializing XFRM netlink socket
Nov 4 04:58:05.941344 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:05.944154 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:05.954194 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:06.002481 systemd-networkd[1488]: docker0: Link UP
Nov 4 04:58:06.003449 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Nov 4 04:58:06.006618 dockerd[1843]: time="2025-11-04T04:58:06.006485239Z" level=info msg="Loading containers: done."
Nov 4 04:58:06.028143 dockerd[1843]: time="2025-11-04T04:58:06.027768623Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 4 04:58:06.028143 dockerd[1843]: time="2025-11-04T04:58:06.027878693Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 4 04:58:06.028143 dockerd[1843]: time="2025-11-04T04:58:06.027974229Z" level=info msg="Initializing buildkit"
Nov 4 04:58:06.056815 dockerd[1843]: time="2025-11-04T04:58:06.056761735Z" level=info msg="Completed buildkit initialization"
Nov 4 04:58:06.067383 dockerd[1843]: time="2025-11-04T04:58:06.067314253Z" level=info msg="Daemon has completed initialization"
Nov 4 04:58:06.067814 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 4 04:58:06.068686 dockerd[1843]: time="2025-11-04T04:58:06.067891179Z" level=info msg="API listen on /run/docker.sock"
Nov 4 04:58:06.587200 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3105108646-merged.mount: Deactivated successfully.
Nov 4 04:58:07.020222 containerd[1595]: time="2025-11-04T04:58:07.020096785Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 4 04:58:07.661627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573401854.mount: Deactivated successfully.
Nov 4 04:58:08.687692 containerd[1595]: time="2025-11-04T04:58:08.687623489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:08.688694 containerd[1595]: time="2025-11-04T04:58:08.688654416Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=27191533"
Nov 4 04:58:08.689640 containerd[1595]: time="2025-11-04T04:58:08.689173086Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:08.691712 containerd[1595]: time="2025-11-04T04:58:08.691678573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:08.693109 containerd[1595]: time="2025-11-04T04:58:08.693067552Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.672919901s"
Nov 4 04:58:08.693260 containerd[1595]: time="2025-11-04T04:58:08.693242546Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 4 04:58:08.694045 containerd[1595]: time="2025-11-04T04:58:08.694013893Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 4 04:58:10.233729 containerd[1595]: time="2025-11-04T04:58:10.233655549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:10.234997 containerd[1595]: time="2025-11-04T04:58:10.234704148Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=0"
Nov 4 04:58:10.235685 containerd[1595]: time="2025-11-04T04:58:10.235653492Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:10.239421 containerd[1595]: time="2025-11-04T04:58:10.239354839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:10.240716 containerd[1595]: time="2025-11-04T04:58:10.240675195Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.546630072s"
Nov 4 04:58:10.240716 containerd[1595]: time="2025-11-04T04:58:10.240711277Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 4 04:58:10.241478 containerd[1595]: time="2025-11-04T04:58:10.241334849Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 4 04:58:11.586420 containerd[1595]: time="2025-11-04T04:58:11.586226503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:11.588172 containerd[1595]: time="2025-11-04T04:58:11.588115662Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=0"
Nov 4 04:58:11.589416 containerd[1595]: time="2025-11-04T04:58:11.589051301Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:11.592096 containerd[1595]: time="2025-11-04T04:58:11.592053222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:11.592821 containerd[1595]: time="2025-11-04T04:58:11.592718152Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.351349329s"
Nov 4 04:58:11.592920 containerd[1595]: time="2025-11-04T04:58:11.592907098Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 4 04:58:11.593928 containerd[1595]: time="2025-11-04T04:58:11.593890171Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 4 04:58:12.803095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3944248864.mount: Deactivated successfully.
Nov 4 04:58:13.417269 containerd[1595]: time="2025-11-04T04:58:13.417211534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:13.419000 containerd[1595]: time="2025-11-04T04:58:13.418958383Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=0"
Nov 4 04:58:13.419587 containerd[1595]: time="2025-11-04T04:58:13.419541936Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:13.422837 containerd[1595]: time="2025-11-04T04:58:13.422769756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:13.423419 containerd[1595]: time="2025-11-04T04:58:13.423378690Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.829349752s"
Nov 4 04:58:13.423628 containerd[1595]: time="2025-11-04T04:58:13.423534509Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 4 04:58:13.424107 containerd[1595]: time="2025-11-04T04:58:13.424086451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 4 04:58:13.425334 systemd-resolved[1280]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Nov 4 04:58:13.948691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800361540.mount: Deactivated successfully.
Nov 4 04:58:13.952269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 4 04:58:13.954351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:58:14.181099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:58:14.198864 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 04:58:14.281074 kubelet[2159]: E1104 04:58:14.280927 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 04:58:14.286698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 04:58:14.286896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 04:58:14.287474 systemd[1]: kubelet.service: Consumed 222ms CPU time, 111.4M memory peak.
Nov 4 04:58:14.892461 containerd[1595]: time="2025-11-04T04:58:14.892412351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:14.894087 containerd[1595]: time="2025-11-04T04:58:14.894054940Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Nov 4 04:58:14.895037 containerd[1595]: time="2025-11-04T04:58:14.895009136Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:14.897424 containerd[1595]: time="2025-11-04T04:58:14.897370851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:14.898732 containerd[1595]: time="2025-11-04T04:58:14.898691063Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.474573932s" Nov 4 04:58:14.899038 containerd[1595]: time="2025-11-04T04:58:14.898734395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 4 04:58:14.899207 containerd[1595]: time="2025-11-04T04:58:14.899188052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 04:58:15.427086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596625395.mount: Deactivated successfully. 
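The "Pulled image" lines report both the image size in bytes and the wall-clock pull time, so effective throughput can be derived directly from them (here, 18562039 bytes in ~1.47s, roughly 12 MiB/s). A sketch of that calculation; it assumes the duration is reported in seconds, so sub-second pulls that containerd logs in milliseconds would need extra handling:

```python
import re

def pull_rate_mib_s(msg: str) -> float:
    """Parse a containerd 'Pulled image' line for the reported size
    (bytes) and duration, returning throughput in MiB/s.

    Assumes a seconds-suffixed duration like 'in 1.474573932s'.
    """
    size = int(re.search(r'size \\?"(\d+)', msg).group(1))
    secs = float(re.search(r'in ([0-9.]+)s', msg).group(1))
    return size / secs / (1 << 20)
```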
Nov 4 04:58:15.431539 containerd[1595]: time="2025-11-04T04:58:15.431488633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:58:15.433334 containerd[1595]: time="2025-11-04T04:58:15.433273506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 04:58:15.433763 containerd[1595]: time="2025-11-04T04:58:15.433721488Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:58:15.436346 containerd[1595]: time="2025-11-04T04:58:15.436270729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:58:15.437287 containerd[1595]: time="2025-11-04T04:58:15.436909342Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 537.689727ms" Nov 4 04:58:15.437287 containerd[1595]: time="2025-11-04T04:58:15.436952358Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 4 04:58:15.437461 containerd[1595]: time="2025-11-04T04:58:15.437440516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 4 04:58:16.093709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355466260.mount: Deactivated 
successfully. Nov 4 04:58:16.502596 systemd-resolved[1280]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 4 04:58:19.533071 containerd[1595]: time="2025-11-04T04:58:19.533012088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:19.534200 containerd[1595]: time="2025-11-04T04:58:19.534103299Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Nov 4 04:58:19.534717 containerd[1595]: time="2025-11-04T04:58:19.534690333Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:19.537234 containerd[1595]: time="2025-11-04T04:58:19.537178588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:19.538744 containerd[1595]: time="2025-11-04T04:58:19.538424683Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.100953215s" Nov 4 04:58:19.538744 containerd[1595]: time="2025-11-04T04:58:19.538470266Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 4 04:58:22.337095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:58:22.337309 systemd[1]: kubelet.service: Consumed 222ms CPU time, 111.4M memory peak. 
Nov 4 04:58:22.339804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:58:22.373515 systemd[1]: Reload requested from client PID 2291 ('systemctl') (unit session-7.scope)... Nov 4 04:58:22.373709 systemd[1]: Reloading... Nov 4 04:58:22.504422 zram_generator::config[2331]: No configuration found. Nov 4 04:58:22.796861 systemd[1]: Reloading finished in 422 ms. Nov 4 04:58:22.850109 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 04:58:22.850200 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 04:58:22.850636 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:58:22.850697 systemd[1]: kubelet.service: Consumed 116ms CPU time, 97.8M memory peak. Nov 4 04:58:22.852497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:58:23.020964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:58:23.031898 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 04:58:23.082180 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:58:23.082180 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 04:58:23.082180 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
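The deprecation warnings above say the runtime endpoint and volume plugin directory should move from command-line flags into the kubelet config file, and that --pod-infra-container-image goes away entirely in 1.35. A rough migration map; the KubeletConfiguration field names are our reading of that API and should be treated as assumptions, since the log only names the flags:

```python
# Deprecated kubelet flags from the warnings above, mapped to where the
# setting is meant to live. Field names are assumed, not taken from this log.
DEPRECATED_FLAGS = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    "--pod-infra-container-image": None,  # removed in 1.35; CRI reports the sandbox image
}

def migration_hint(flag: str) -> str:
    field = DEPRECATED_FLAGS[flag]
    if field is None:
        return f"{flag}: drop the flag; the runtime supplies this via CRI"
    return f"{flag}: set '{field}' in the KubeletConfiguration file"
```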
Nov 4 04:58:23.082180 kubelet[2389]: I1104 04:58:23.081629 2389 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 04:58:23.749307 kubelet[2389]: I1104 04:58:23.749227 2389 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 4 04:58:23.749307 kubelet[2389]: I1104 04:58:23.749290 2389 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 04:58:23.749864 kubelet[2389]: I1104 04:58:23.749830 2389 server.go:954] "Client rotation is on, will bootstrap in background" Nov 4 04:58:23.791911 kubelet[2389]: I1104 04:58:23.791825 2389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:58:23.795410 kubelet[2389]: E1104 04:58:23.794217 2389 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://164.92.104.185:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.92.104.185:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:58:23.804514 kubelet[2389]: I1104 04:58:23.804484 2389 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 04:58:23.810827 kubelet[2389]: I1104 04:58:23.810778 2389 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 04:58:23.813119 kubelet[2389]: I1104 04:58:23.813048 2389 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 04:58:23.813419 kubelet[2389]: I1104 04:58:23.813123 2389 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4508.0.0-n-4006da48af","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 04:58:23.815558 kubelet[2389]: I1104 04:58:23.815488 2389 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 4 04:58:23.815558 kubelet[2389]: I1104 04:58:23.815538 2389 container_manager_linux.go:304] "Creating device plugin manager" Nov 4 04:58:23.817178 kubelet[2389]: I1104 04:58:23.817130 2389 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:58:23.821747 kubelet[2389]: I1104 04:58:23.821679 2389 kubelet.go:446] "Attempting to sync node with API server" Nov 4 04:58:23.821747 kubelet[2389]: I1104 04:58:23.821746 2389 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 04:58:23.821986 kubelet[2389]: I1104 04:58:23.821789 2389 kubelet.go:352] "Adding apiserver pod source" Nov 4 04:58:23.821986 kubelet[2389]: I1104 04:58:23.821809 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 04:58:23.828530 kubelet[2389]: W1104 04:58:23.827882 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.104.185:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.92.104.185:6443: connect: connection refused Nov 4 04:58:23.828530 kubelet[2389]: E1104 04:58:23.827965 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.92.104.185:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.104.185:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:58:23.828530 kubelet[2389]: W1104 04:58:23.828088 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.104.185:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4508.0.0-n-4006da48af&limit=500&resourceVersion=0": dial tcp 164.92.104.185:6443: connect: connection refused Nov 4 04:58:23.828530 kubelet[2389]: E1104 04:58:23.828140 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Node: failed to list *v1.Node: Get \"https://164.92.104.185:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4508.0.0-n-4006da48af&limit=500&resourceVersion=0\": dial tcp 164.92.104.185:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:58:23.830260 kubelet[2389]: I1104 04:58:23.830202 2389 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 04:58:23.835430 kubelet[2389]: I1104 04:58:23.834022 2389 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 4 04:58:23.835430 kubelet[2389]: W1104 04:58:23.834134 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 04:58:23.835586 kubelet[2389]: I1104 04:58:23.835462 2389 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 04:58:23.835586 kubelet[2389]: I1104 04:58:23.835507 2389 server.go:1287] "Started kubelet" Nov 4 04:58:23.837853 kubelet[2389]: I1104 04:58:23.837801 2389 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 04:58:23.839299 kubelet[2389]: I1104 04:58:23.839268 2389 server.go:479] "Adding debug handlers to kubelet server" Nov 4 04:58:23.842683 kubelet[2389]: I1104 04:58:23.842120 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 04:58:23.842683 kubelet[2389]: I1104 04:58:23.842543 2389 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 04:58:23.845357 kubelet[2389]: I1104 04:58:23.845323 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 04:58:23.846794 kubelet[2389]: E1104 04:58:23.845217 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.104.185:6443/api/v1/namespaces/default/events\": dial tcp 164.92.104.185:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4508.0.0-n-4006da48af.1874b4f191d50241 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4508.0.0-n-4006da48af,UID:ci-4508.0.0-n-4006da48af,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4508.0.0-n-4006da48af,},FirstTimestamp:2025-11-04 04:58:23.835480641 +0000 UTC m=+0.799510572,LastTimestamp:2025-11-04 04:58:23.835480641 +0000 UTC m=+0.799510572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4508.0.0-n-4006da48af,}" Nov 4 04:58:23.852315 kubelet[2389]: I1104 04:58:23.852272 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 04:58:23.856012 kubelet[2389]: E1104 04:58:23.855979 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4508.0.0-n-4006da48af\" not found" Nov 4 04:58:23.856735 kubelet[2389]: I1104 04:58:23.856638 2389 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 04:58:23.865306 kubelet[2389]: E1104 04:58:23.865261 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.104.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4508.0.0-n-4006da48af?timeout=10s\": dial tcp 164.92.104.185:6443: connect: connection refused" interval="200ms" Nov 4 04:58:23.866606 kubelet[2389]: W1104 04:58:23.866556 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.104.185:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.104.185:6443: connect: connection refused Nov 4 04:58:23.866814 kubelet[2389]: E1104 04:58:23.866790 2389 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.92.104.185:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.104.185:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:58:23.867117 kubelet[2389]: I1104 04:58:23.867062 2389 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 04:58:23.867186 kubelet[2389]: I1104 04:58:23.867166 2389 reconciler.go:26] "Reconciler: start to sync state" Nov 4 04:58:23.868700 kubelet[2389]: I1104 04:58:23.868646 2389 factory.go:221] Registration of the systemd container factory successfully Nov 4 04:58:23.868790 kubelet[2389]: I1104 04:58:23.868766 2389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 04:58:23.869833 kubelet[2389]: E1104 04:58:23.869763 2389 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 04:58:23.873701 kubelet[2389]: I1104 04:58:23.873624 2389 factory.go:221] Registration of the containerd container factory successfully Nov 4 04:58:23.891263 kubelet[2389]: I1104 04:58:23.891235 2389 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 04:58:23.892962 kubelet[2389]: I1104 04:58:23.892790 2389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 04:58:23.892962 kubelet[2389]: I1104 04:58:23.892823 2389 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:58:23.897833 kubelet[2389]: I1104 04:58:23.897777 2389 policy_none.go:49] "None policy: Start" Nov 4 04:58:23.898229 kubelet[2389]: I1104 04:58:23.898073 2389 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 04:58:23.898229 kubelet[2389]: I1104 04:58:23.898110 2389 state_mem.go:35] "Initializing new in-memory state store" Nov 4 04:58:23.905791 kubelet[2389]: I1104 04:58:23.905746 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 4 04:58:23.907629 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 04:58:23.910159 kubelet[2389]: I1104 04:58:23.910058 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 4 04:58:23.910159 kubelet[2389]: I1104 04:58:23.910095 2389 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 4 04:58:23.910159 kubelet[2389]: I1104 04:58:23.910117 2389 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 04:58:23.910159 kubelet[2389]: I1104 04:58:23.910125 2389 kubelet.go:2382] "Starting kubelet main sync loop" Nov 4 04:58:23.910645 kubelet[2389]: E1104 04:58:23.910180 2389 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 04:58:23.911114 kubelet[2389]: W1104 04:58:23.910764 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.104.185:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.104.185:6443: connect: connection refused Nov 4 04:58:23.911114 kubelet[2389]: E1104 04:58:23.910820 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://164.92.104.185:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.104.185:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:58:23.921932 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 04:58:23.927919 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
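The kubepods-burstable.slice and kubepods-besteffort.slice units created above are the QoS tier parents; each pod then gets its own child slice, as seen a moment later with kubepods-burstable-pod2267432b7d0e7cd28e1820ca408e1648.slice. A sketch of the naming scheme under the systemd cgroup driver; the dash-to-underscore UID escaping is our understanding of kubelet's cgroup naming and is not visible in this log (these UIDs happen to contain no dashes):

```python
def pod_slice(qos: str, pod_uid: str) -> str:
    """Build the systemd slice name for a pod, mirroring the pattern in
    the log: kubepods[-<qos>]-pod<uid>.slice. Guaranteed pods sit
    directly under kubepods.slice; '-' in the UID is escaped to '_'."""
    uid = pod_uid.replace("-", "_")
    base = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{base}-pod{uid}.slice"
```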
Nov 4 04:58:23.939146 kubelet[2389]: I1104 04:58:23.939098 2389 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 4 04:58:23.939516 kubelet[2389]: I1104 04:58:23.939331 2389 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 04:58:23.939516 kubelet[2389]: I1104 04:58:23.939344 2389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 04:58:23.939957 kubelet[2389]: I1104 04:58:23.939929 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 04:58:23.942415 kubelet[2389]: E1104 04:58:23.942375 2389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 04:58:23.942779 kubelet[2389]: E1104 04:58:23.942758 2389 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4508.0.0-n-4006da48af\" not found" Nov 4 04:58:24.023289 systemd[1]: Created slice kubepods-burstable-pod2267432b7d0e7cd28e1820ca408e1648.slice - libcontainer container kubepods-burstable-pod2267432b7d0e7cd28e1820ca408e1648.slice. Nov 4 04:58:24.034013 kubelet[2389]: E1104 04:58:24.033467 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4508.0.0-n-4006da48af\" not found" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.036416 systemd[1]: Created slice kubepods-burstable-podc6a690f32abbc286ce4cfd35d4234c5e.slice - libcontainer container kubepods-burstable-podc6a690f32abbc286ce4cfd35d4234c5e.slice. 
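The eviction manager starting its control loop here enforces the HardEvictionThresholds dumped in the node config earlier: each threshold carries either an absolute quantity (memory.available < 100Mi) or a fraction of capacity (imagefs.available < 0.15, nodefs.available < 0.1). A minimal illustration of how one such signal evaluates; the helper is illustrative, not kubelet code:

```python
def hard_eviction_triggered(available, capacity, quantity=None, percentage=None):
    """Evaluate one hard eviction threshold from the node config above:
    either an absolute byte quantity (100Mi => 104857600 bytes) or a
    fraction of capacity (e.g. 0.15). Exactly one of the two is set."""
    limit = quantity if quantity is not None else capacity * percentage
    return available < limit
```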
Nov 4 04:58:24.039693 kubelet[2389]: E1104 04:58:24.039661 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4508.0.0-n-4006da48af\" not found" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.041335 kubelet[2389]: I1104 04:58:24.041287 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.041864 kubelet[2389]: E1104 04:58:24.041818 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.104.185:6443/api/v1/nodes\": dial tcp 164.92.104.185:6443: connect: connection refused" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.045587 systemd[1]: Created slice kubepods-burstable-podb63e0872129ef69e376998a2bcd67628.slice - libcontainer container kubepods-burstable-podb63e0872129ef69e376998a2bcd67628.slice. Nov 4 04:58:24.049869 kubelet[2389]: E1104 04:58:24.049827 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4508.0.0-n-4006da48af\" not found" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.068661 kubelet[2389]: E1104 04:58:24.068611 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.104.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4508.0.0-n-4006da48af?timeout=10s\": dial tcp 164.92.104.185:6443: connect: connection refused" interval="400ms" Nov 4 04:58:24.168630 kubelet[2389]: I1104 04:58:24.168559 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6a690f32abbc286ce4cfd35d4234c5e-k8s-certs\") pod \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" (UID: \"c6a690f32abbc286ce4cfd35d4234c5e\") " pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.168630 kubelet[2389]: I1104 04:58:24.168619 2389 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2267432b7d0e7cd28e1820ca408e1648-ca-certs\") pod \"kube-apiserver-ci-4508.0.0-n-4006da48af\" (UID: \"2267432b7d0e7cd28e1820ca408e1648\") " pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.168630 kubelet[2389]: I1104 04:58:24.168646 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2267432b7d0e7cd28e1820ca408e1648-k8s-certs\") pod \"kube-apiserver-ci-4508.0.0-n-4006da48af\" (UID: \"2267432b7d0e7cd28e1820ca408e1648\") " pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.169186 kubelet[2389]: I1104 04:58:24.168665 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2267432b7d0e7cd28e1820ca408e1648-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4508.0.0-n-4006da48af\" (UID: \"2267432b7d0e7cd28e1820ca408e1648\") " pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.169186 kubelet[2389]: I1104 04:58:24.168692 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6a690f32abbc286ce4cfd35d4234c5e-ca-certs\") pod \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" (UID: \"c6a690f32abbc286ce4cfd35d4234c5e\") " pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.169186 kubelet[2389]: I1104 04:58:24.168716 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6a690f32abbc286ce4cfd35d4234c5e-flexvolume-dir\") pod \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" (UID: \"c6a690f32abbc286ce4cfd35d4234c5e\") " 
pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.169186 kubelet[2389]: I1104 04:58:24.168734 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6a690f32abbc286ce4cfd35d4234c5e-kubeconfig\") pod \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" (UID: \"c6a690f32abbc286ce4cfd35d4234c5e\") " pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.169186 kubelet[2389]: I1104 04:58:24.168749 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6a690f32abbc286ce4cfd35d4234c5e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" (UID: \"c6a690f32abbc286ce4cfd35d4234c5e\") " pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.169312 kubelet[2389]: I1104 04:58:24.168764 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b63e0872129ef69e376998a2bcd67628-kubeconfig\") pod \"kube-scheduler-ci-4508.0.0-n-4006da48af\" (UID: \"b63e0872129ef69e376998a2bcd67628\") " pod="kube-system/kube-scheduler-ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.243765 kubelet[2389]: I1104 04:58:24.243703 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.244189 kubelet[2389]: E1104 04:58:24.244146 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.104.185:6443/api/v1/nodes\": dial tcp 164.92.104.185:6443: connect: connection refused" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.335161 kubelet[2389]: E1104 04:58:24.334996 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:24.337165 containerd[1595]: time="2025-11-04T04:58:24.337105517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4508.0.0-n-4006da48af,Uid:2267432b7d0e7cd28e1820ca408e1648,Namespace:kube-system,Attempt:0,}" Nov 4 04:58:24.342896 kubelet[2389]: E1104 04:58:24.342591 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:24.343111 containerd[1595]: time="2025-11-04T04:58:24.343069730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4508.0.0-n-4006da48af,Uid:c6a690f32abbc286ce4cfd35d4234c5e,Namespace:kube-system,Attempt:0,}" Nov 4 04:58:24.351315 kubelet[2389]: E1104 04:58:24.351260 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:24.352023 containerd[1595]: time="2025-11-04T04:58:24.351979237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4508.0.0-n-4006da48af,Uid:b63e0872129ef69e376998a2bcd67628,Namespace:kube-system,Attempt:0,}" Nov 4 04:58:24.444481 containerd[1595]: time="2025-11-04T04:58:24.444426772Z" level=info msg="connecting to shim 76daebfe1cd0e8d262761909d6fb1f847b57a0d3f506d93cfaa1e7362480a943" address="unix:///run/containerd/s/65bde6bbeb077f40369cf031f9bb4f5389787dddb3f94dd3be0d0d8c72969da3" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:24.451887 containerd[1595]: time="2025-11-04T04:58:24.451838593Z" level=info msg="connecting to shim 8a411dc0edb10c42665c53c3931aec62116bdc6235dad4b95b2404f3dc95d675" address="unix:///run/containerd/s/a699984c1a4042783a96890ce6dec54bab9e467fe2b2b7423c491e93d8baf5e2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:24.454617 containerd[1595]: 
time="2025-11-04T04:58:24.454533655Z" level=info msg="connecting to shim 56a7aa3b6221bc85f4d67b3755d17582b4b929d672da329569eb3f24839375e1" address="unix:///run/containerd/s/7610f1f98492b1d6201a4de41158df369b33ccb92a5be53a767e8d74eb54b468" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:24.470232 kubelet[2389]: E1104 04:58:24.470164 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.104.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4508.0.0-n-4006da48af?timeout=10s\": dial tcp 164.92.104.185:6443: connect: connection refused" interval="800ms" Nov 4 04:58:24.560632 systemd[1]: Started cri-containerd-76daebfe1cd0e8d262761909d6fb1f847b57a0d3f506d93cfaa1e7362480a943.scope - libcontainer container 76daebfe1cd0e8d262761909d6fb1f847b57a0d3f506d93cfaa1e7362480a943. Nov 4 04:58:24.563204 systemd[1]: Started cri-containerd-8a411dc0edb10c42665c53c3931aec62116bdc6235dad4b95b2404f3dc95d675.scope - libcontainer container 8a411dc0edb10c42665c53c3931aec62116bdc6235dad4b95b2404f3dc95d675. Nov 4 04:58:24.570314 systemd[1]: Started cri-containerd-56a7aa3b6221bc85f4d67b3755d17582b4b929d672da329569eb3f24839375e1.scope - libcontainer container 56a7aa3b6221bc85f4d67b3755d17582b4b929d672da329569eb3f24839375e1. 
Nov 4 04:58:24.647862 kubelet[2389]: I1104 04:58:24.647595 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.652141 kubelet[2389]: E1104 04:58:24.652027 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.104.185:6443/api/v1/nodes\": dial tcp 164.92.104.185:6443: connect: connection refused" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.676079 containerd[1595]: time="2025-11-04T04:58:24.675712658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4508.0.0-n-4006da48af,Uid:b63e0872129ef69e376998a2bcd67628,Namespace:kube-system,Attempt:0,} returns sandbox id \"76daebfe1cd0e8d262761909d6fb1f847b57a0d3f506d93cfaa1e7362480a943\"" Nov 4 04:58:24.677030 containerd[1595]: time="2025-11-04T04:58:24.676983322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4508.0.0-n-4006da48af,Uid:c6a690f32abbc286ce4cfd35d4234c5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a411dc0edb10c42665c53c3931aec62116bdc6235dad4b95b2404f3dc95d675\"" Nov 4 04:58:24.679881 kubelet[2389]: E1104 04:58:24.679837 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:24.680149 kubelet[2389]: E1104 04:58:24.680074 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:24.682203 containerd[1595]: time="2025-11-04T04:58:24.682165849Z" level=info msg="CreateContainer within sandbox \"8a411dc0edb10c42665c53c3931aec62116bdc6235dad4b95b2404f3dc95d675\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 04:58:24.682541 containerd[1595]: time="2025-11-04T04:58:24.682512362Z" level=info msg="CreateContainer within 
sandbox \"76daebfe1cd0e8d262761909d6fb1f847b57a0d3f506d93cfaa1e7362480a943\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 04:58:24.693367 containerd[1595]: time="2025-11-04T04:58:24.693311114Z" level=info msg="Container 270365cd672522269b84152a666020c752b4c87710927d9d3af25b1b74523c97: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:58:24.695226 containerd[1595]: time="2025-11-04T04:58:24.694625576Z" level=info msg="Container e256dbe213c7cc8cdb469731f2aeea1eac1e2aed9485984d425bf0a499125556: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:58:24.703211 containerd[1595]: time="2025-11-04T04:58:24.703164624Z" level=info msg="CreateContainer within sandbox \"8a411dc0edb10c42665c53c3931aec62116bdc6235dad4b95b2404f3dc95d675\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e256dbe213c7cc8cdb469731f2aeea1eac1e2aed9485984d425bf0a499125556\"" Nov 4 04:58:24.704136 containerd[1595]: time="2025-11-04T04:58:24.704089582Z" level=info msg="CreateContainer within sandbox \"76daebfe1cd0e8d262761909d6fb1f847b57a0d3f506d93cfaa1e7362480a943\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"270365cd672522269b84152a666020c752b4c87710927d9d3af25b1b74523c97\"" Nov 4 04:58:24.704548 containerd[1595]: time="2025-11-04T04:58:24.704297062Z" level=info msg="StartContainer for \"e256dbe213c7cc8cdb469731f2aeea1eac1e2aed9485984d425bf0a499125556\"" Nov 4 04:58:24.705186 containerd[1595]: time="2025-11-04T04:58:24.704912451Z" level=info msg="StartContainer for \"270365cd672522269b84152a666020c752b4c87710927d9d3af25b1b74523c97\"" Nov 4 04:58:24.707182 containerd[1595]: time="2025-11-04T04:58:24.707067128Z" level=info msg="connecting to shim e256dbe213c7cc8cdb469731f2aeea1eac1e2aed9485984d425bf0a499125556" address="unix:///run/containerd/s/a699984c1a4042783a96890ce6dec54bab9e467fe2b2b7423c491e93d8baf5e2" protocol=ttrpc version=3 Nov 4 04:58:24.708869 containerd[1595]: 
time="2025-11-04T04:58:24.708752965Z" level=info msg="connecting to shim 270365cd672522269b84152a666020c752b4c87710927d9d3af25b1b74523c97" address="unix:///run/containerd/s/65bde6bbeb077f40369cf031f9bb4f5389787dddb3f94dd3be0d0d8c72969da3" protocol=ttrpc version=3 Nov 4 04:58:24.715201 containerd[1595]: time="2025-11-04T04:58:24.715060132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4508.0.0-n-4006da48af,Uid:2267432b7d0e7cd28e1820ca408e1648,Namespace:kube-system,Attempt:0,} returns sandbox id \"56a7aa3b6221bc85f4d67b3755d17582b4b929d672da329569eb3f24839375e1\"" Nov 4 04:58:24.716512 kubelet[2389]: E1104 04:58:24.716316 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:24.727020 containerd[1595]: time="2025-11-04T04:58:24.726978031Z" level=info msg="CreateContainer within sandbox \"56a7aa3b6221bc85f4d67b3755d17582b4b929d672da329569eb3f24839375e1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 04:58:24.737612 systemd[1]: Started cri-containerd-270365cd672522269b84152a666020c752b4c87710927d9d3af25b1b74523c97.scope - libcontainer container 270365cd672522269b84152a666020c752b4c87710927d9d3af25b1b74523c97. Nov 4 04:58:24.746169 containerd[1595]: time="2025-11-04T04:58:24.745271490Z" level=info msg="Container 52feacb2102dc35ad3fd9537016ae958dddb10da5d151295fd4c2f4f340728af: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:58:24.749641 systemd[1]: Started cri-containerd-e256dbe213c7cc8cdb469731f2aeea1eac1e2aed9485984d425bf0a499125556.scope - libcontainer container e256dbe213c7cc8cdb469731f2aeea1eac1e2aed9485984d425bf0a499125556. 
Nov 4 04:58:24.760148 containerd[1595]: time="2025-11-04T04:58:24.760032646Z" level=info msg="CreateContainer within sandbox \"56a7aa3b6221bc85f4d67b3755d17582b4b929d672da329569eb3f24839375e1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"52feacb2102dc35ad3fd9537016ae958dddb10da5d151295fd4c2f4f340728af\"" Nov 4 04:58:24.761431 containerd[1595]: time="2025-11-04T04:58:24.760727911Z" level=info msg="StartContainer for \"52feacb2102dc35ad3fd9537016ae958dddb10da5d151295fd4c2f4f340728af\"" Nov 4 04:58:24.762314 containerd[1595]: time="2025-11-04T04:58:24.762288037Z" level=info msg="connecting to shim 52feacb2102dc35ad3fd9537016ae958dddb10da5d151295fd4c2f4f340728af" address="unix:///run/containerd/s/7610f1f98492b1d6201a4de41158df369b33ccb92a5be53a767e8d74eb54b468" protocol=ttrpc version=3 Nov 4 04:58:24.802641 systemd[1]: Started cri-containerd-52feacb2102dc35ad3fd9537016ae958dddb10da5d151295fd4c2f4f340728af.scope - libcontainer container 52feacb2102dc35ad3fd9537016ae958dddb10da5d151295fd4c2f4f340728af. 
Nov 4 04:58:24.857679 containerd[1595]: time="2025-11-04T04:58:24.857554689Z" level=info msg="StartContainer for \"e256dbe213c7cc8cdb469731f2aeea1eac1e2aed9485984d425bf0a499125556\" returns successfully" Nov 4 04:58:24.862423 containerd[1595]: time="2025-11-04T04:58:24.861405686Z" level=info msg="StartContainer for \"270365cd672522269b84152a666020c752b4c87710927d9d3af25b1b74523c97\" returns successfully" Nov 4 04:58:24.897497 containerd[1595]: time="2025-11-04T04:58:24.897435352Z" level=info msg="StartContainer for \"52feacb2102dc35ad3fd9537016ae958dddb10da5d151295fd4c2f4f340728af\" returns successfully" Nov 4 04:58:24.925460 kubelet[2389]: E1104 04:58:24.925146 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4508.0.0-n-4006da48af\" not found" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.925640 kubelet[2389]: E1104 04:58:24.925596 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:24.929876 kubelet[2389]: E1104 04:58:24.929827 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4508.0.0-n-4006da48af\" not found" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.930660 kubelet[2389]: E1104 04:58:24.930218 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:24.933380 kubelet[2389]: E1104 04:58:24.933349 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4508.0.0-n-4006da48af\" not found" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:24.933679 kubelet[2389]: E1104 04:58:24.933661 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:25.015090 kubelet[2389]: W1104 04:58:25.013871 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.104.185:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4508.0.0-n-4006da48af&limit=500&resourceVersion=0": dial tcp 164.92.104.185:6443: connect: connection refused Nov 4 04:58:25.015090 kubelet[2389]: E1104 04:58:25.013971 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://164.92.104.185:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4508.0.0-n-4006da48af&limit=500&resourceVersion=0\": dial tcp 164.92.104.185:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:58:25.162489 kubelet[2389]: W1104 04:58:25.162371 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.104.185:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.92.104.185:6443: connect: connection refused Nov 4 04:58:25.162489 kubelet[2389]: E1104 04:58:25.162494 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.92.104.185:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.104.185:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:58:25.455220 kubelet[2389]: I1104 04:58:25.455175 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:25.939093 kubelet[2389]: E1104 04:58:25.938615 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4508.0.0-n-4006da48af\" not found" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:25.939093 
kubelet[2389]: E1104 04:58:25.938789 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:25.939819 kubelet[2389]: E1104 04:58:25.939794 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4508.0.0-n-4006da48af\" not found" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:25.939981 kubelet[2389]: E1104 04:58:25.939965 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:26.919501 kubelet[2389]: E1104 04:58:26.919450 2389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4508.0.0-n-4006da48af\" not found" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:26.940366 kubelet[2389]: E1104 04:58:26.940334 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4508.0.0-n-4006da48af\" not found" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:26.940559 kubelet[2389]: E1104 04:58:26.940519 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:26.954601 kubelet[2389]: I1104 04:58:26.954144 2389 kubelet_node_status.go:78] "Successfully registered node" node="ci-4508.0.0-n-4006da48af" Nov 4 04:58:26.964529 kubelet[2389]: I1104 04:58:26.964477 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af" Nov 4 04:58:26.990963 kubelet[2389]: E1104 04:58:26.990918 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4508.0.0-n-4006da48af\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af" Nov 4 04:58:26.990963 kubelet[2389]: I1104 04:58:26.990953 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af" Nov 4 04:58:26.994706 kubelet[2389]: E1104 04:58:26.994660 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af" Nov 4 04:58:26.994706 kubelet[2389]: I1104 04:58:26.994703 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4508.0.0-n-4006da48af" Nov 4 04:58:26.996692 kubelet[2389]: E1104 04:58:26.996644 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4508.0.0-n-4006da48af\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4508.0.0-n-4006da48af" Nov 4 04:58:27.827410 kubelet[2389]: I1104 04:58:27.827331 2389 apiserver.go:52] "Watching apiserver" Nov 4 04:58:27.867274 kubelet[2389]: I1104 04:58:27.867209 2389 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 04:58:29.003623 systemd[1]: Reload requested from client PID 2653 ('systemctl') (unit session-7.scope)... Nov 4 04:58:29.004012 systemd[1]: Reloading... Nov 4 04:58:29.111448 zram_generator::config[2693]: No configuration found. Nov 4 04:58:29.433316 systemd[1]: Reloading finished in 428 ms. Nov 4 04:58:29.475266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:58:29.487983 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 04:58:29.488321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:58:29.488421 systemd[1]: kubelet.service: Consumed 1.241s CPU time, 128.7M memory peak. 
Nov 4 04:58:29.492304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:58:29.684420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:58:29.698130 (kubelet)[2748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 04:58:29.798325 kubelet[2748]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:58:29.798325 kubelet[2748]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 04:58:29.798325 kubelet[2748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:58:29.798875 kubelet[2748]: I1104 04:58:29.798434 2748 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 04:58:29.807185 kubelet[2748]: I1104 04:58:29.807136 2748 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 4 04:58:29.807185 kubelet[2748]: I1104 04:58:29.807175 2748 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 04:58:29.808563 kubelet[2748]: I1104 04:58:29.808534 2748 server.go:954] "Client rotation is on, will bootstrap in background" Nov 4 04:58:29.815234 kubelet[2748]: I1104 04:58:29.815200 2748 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 4 04:58:29.818695 kubelet[2748]: I1104 04:58:29.818650 2748 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:58:29.824674 kubelet[2748]: I1104 04:58:29.824620 2748 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 04:58:29.829236 kubelet[2748]: I1104 04:58:29.829173 2748 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 4 04:58:29.829777 kubelet[2748]: I1104 04:58:29.829729 2748 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 04:58:29.830225 kubelet[2748]: I1104 04:58:29.829856 2748 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4508.0.0-n-4006da48af","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none"
,"CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 04:58:29.830411 kubelet[2748]: I1104 04:58:29.830397 2748 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 04:58:29.830567 kubelet[2748]: I1104 04:58:29.830476 2748 container_manager_linux.go:304] "Creating device plugin manager" Nov 4 04:58:29.830567 kubelet[2748]: I1104 04:58:29.830532 2748 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:58:29.830789 kubelet[2748]: I1104 04:58:29.830777 2748 kubelet.go:446] "Attempting to sync node with API server" Nov 4 04:58:29.830888 kubelet[2748]: I1104 04:58:29.830877 2748 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 04:58:29.830977 kubelet[2748]: I1104 04:58:29.830968 2748 kubelet.go:352] "Adding apiserver pod source" Nov 4 04:58:29.831080 kubelet[2748]: I1104 04:58:29.831067 2748 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 04:58:29.832442 kubelet[2748]: I1104 04:58:29.832319 2748 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 04:58:29.832918 kubelet[2748]: I1104 04:58:29.832895 2748 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 4 04:58:29.833726 kubelet[2748]: I1104 04:58:29.833517 2748 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 04:58:29.833726 kubelet[2748]: I1104 04:58:29.833557 2748 server.go:1287] "Started kubelet" Nov 4 04:58:29.837028 kubelet[2748]: I1104 04:58:29.836597 2748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 04:58:29.842461 kubelet[2748]: I1104 04:58:29.841078 2748 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 04:58:29.844343 kubelet[2748]: I1104 04:58:29.844317 2748 server.go:479] "Adding debug handlers to kubelet server" Nov 4 04:58:29.849206 kubelet[2748]: I1104 04:58:29.849134 2748 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 04:58:29.849626 kubelet[2748]: I1104 04:58:29.849604 2748 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 04:58:29.850084 kubelet[2748]: I1104 04:58:29.850062 2748 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 04:58:29.854677 kubelet[2748]: I1104 04:58:29.854487 2748 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 04:58:29.856640 kubelet[2748]: E1104 04:58:29.856512 2748 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4508.0.0-n-4006da48af\" not found" Nov 4 04:58:29.865017 kubelet[2748]: I1104 04:58:29.864972 2748 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 04:58:29.865345 kubelet[2748]: I1104 04:58:29.865331 2748 reconciler.go:26] "Reconciler: start to sync state" Nov 4 04:58:29.881143 kubelet[2748]: I1104 04:58:29.881089 2748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 4 04:58:29.889220 kubelet[2748]: I1104 04:58:29.889185 2748 factory.go:221] Registration of the systemd container factory successfully Nov 4 04:58:29.890423 kubelet[2748]: I1104 04:58:29.889570 2748 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 04:58:29.897311 kubelet[2748]: I1104 04:58:29.897218 2748 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 4 04:58:29.897311 kubelet[2748]: I1104 04:58:29.897302 2748 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 4 04:58:29.899640 kubelet[2748]: I1104 04:58:29.897334 2748 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 04:58:29.899640 kubelet[2748]: I1104 04:58:29.897348 2748 kubelet.go:2382] "Starting kubelet main sync loop" Nov 4 04:58:29.899640 kubelet[2748]: E1104 04:58:29.897475 2748 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 04:58:29.903152 kubelet[2748]: I1104 04:58:29.903114 2748 factory.go:221] Registration of the containerd container factory successfully Nov 4 04:58:29.918565 kubelet[2748]: E1104 04:58:29.918383 2748 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 04:58:29.983554 kubelet[2748]: I1104 04:58:29.983375 2748 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 04:58:29.983554 kubelet[2748]: I1104 04:58:29.983519 2748 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 04:58:29.983708 kubelet[2748]: I1104 04:58:29.983560 2748 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:58:29.983755 kubelet[2748]: I1104 04:58:29.983736 2748 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 04:58:29.983785 kubelet[2748]: I1104 04:58:29.983751 2748 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 04:58:29.983785 kubelet[2748]: I1104 04:58:29.983770 2748 policy_none.go:49] "None policy: Start" Nov 4 04:58:29.983785 kubelet[2748]: I1104 04:58:29.983781 2748 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 04:58:29.983875 kubelet[2748]: I1104 04:58:29.983790 2748 state_mem.go:35] "Initializing new in-memory state store" Nov 4 
04:58:29.983922 kubelet[2748]: I1104 04:58:29.983890 2748 state_mem.go:75] "Updated machine memory state" Nov 4 04:58:29.992672 kubelet[2748]: I1104 04:58:29.992620 2748 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 4 04:58:29.996447 kubelet[2748]: I1104 04:58:29.996347 2748 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 04:58:29.996627 kubelet[2748]: I1104 04:58:29.996379 2748 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 04:58:29.999777 kubelet[2748]: I1104 04:58:29.999210 2748 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 04:58:30.000011 kubelet[2748]: I1104 04:58:29.999953 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4508.0.0-n-4006da48af" Nov 4 04:58:30.002635 kubelet[2748]: I1104 04:58:29.999751 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af" Nov 4 04:58:30.004750 kubelet[2748]: I1104 04:58:30.001373 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af" Nov 4 04:58:30.015418 kubelet[2748]: E1104 04:58:30.013868 2748 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 04:58:30.026114 kubelet[2748]: W1104 04:58:30.026068 2748 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 4 04:58:30.037873 kubelet[2748]: W1104 04:58:30.037831 2748 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 4 04:58:30.038066 kubelet[2748]: W1104 04:58:30.036548 2748 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 4 04:58:30.067637 kubelet[2748]: I1104 04:58:30.067584 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2267432b7d0e7cd28e1820ca408e1648-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4508.0.0-n-4006da48af\" (UID: \"2267432b7d0e7cd28e1820ca408e1648\") " pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af" Nov 4 04:58:30.068140 kubelet[2748]: I1104 04:58:30.067885 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6a690f32abbc286ce4cfd35d4234c5e-k8s-certs\") pod \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" (UID: \"c6a690f32abbc286ce4cfd35d4234c5e\") " pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af" Nov 4 04:58:30.068140 kubelet[2748]: I1104 04:58:30.067926 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6a690f32abbc286ce4cfd35d4234c5e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" (UID: \"c6a690f32abbc286ce4cfd35d4234c5e\") " 
pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.068140 kubelet[2748]: I1104 04:58:30.067957 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b63e0872129ef69e376998a2bcd67628-kubeconfig\") pod \"kube-scheduler-ci-4508.0.0-n-4006da48af\" (UID: \"b63e0872129ef69e376998a2bcd67628\") " pod="kube-system/kube-scheduler-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.068140 kubelet[2748]: I1104 04:58:30.067979 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2267432b7d0e7cd28e1820ca408e1648-k8s-certs\") pod \"kube-apiserver-ci-4508.0.0-n-4006da48af\" (UID: \"2267432b7d0e7cd28e1820ca408e1648\") " pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.068140 kubelet[2748]: I1104 04:58:30.068000 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6a690f32abbc286ce4cfd35d4234c5e-ca-certs\") pod \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" (UID: \"c6a690f32abbc286ce4cfd35d4234c5e\") " pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.068368 kubelet[2748]: I1104 04:58:30.068047 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6a690f32abbc286ce4cfd35d4234c5e-flexvolume-dir\") pod \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" (UID: \"c6a690f32abbc286ce4cfd35d4234c5e\") " pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.068368 kubelet[2748]: I1104 04:58:30.068073 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6a690f32abbc286ce4cfd35d4234c5e-kubeconfig\") pod \"kube-controller-manager-ci-4508.0.0-n-4006da48af\" (UID: \"c6a690f32abbc286ce4cfd35d4234c5e\") " pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.068368 kubelet[2748]: I1104 04:58:30.068094 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2267432b7d0e7cd28e1820ca408e1648-ca-certs\") pod \"kube-apiserver-ci-4508.0.0-n-4006da48af\" (UID: \"2267432b7d0e7cd28e1820ca408e1648\") " pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.112365 kubelet[2748]: I1104 04:58:30.112324 2748 kubelet_node_status.go:75] "Attempting to register node" node="ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.130908 kubelet[2748]: I1104 04:58:30.130672 2748 kubelet_node_status.go:124] "Node was previously registered" node="ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.131249 kubelet[2748]: I1104 04:58:30.131221 2748 kubelet_node_status.go:78] "Successfully registered node" node="ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.327938 kubelet[2748]: E1104 04:58:30.327501 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:30.339452 kubelet[2748]: E1104 04:58:30.338875 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:30.340837 kubelet[2748]: E1104 04:58:30.340764 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:30.850829 kubelet[2748]: I1104 04:58:30.850551 2748 apiserver.go:52] "Watching apiserver"
Nov 4 04:58:30.867546 kubelet[2748]: I1104 04:58:30.867494 2748 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 4 04:58:30.899066 kubelet[2748]: I1104 04:58:30.897939 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4508.0.0-n-4006da48af" podStartSLOduration=0.897921753 podStartE2EDuration="897.921753ms" podCreationTimestamp="2025-11-04 04:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:58:30.897316684 +0000 UTC m=+1.190303960" watchObservedRunningTime="2025-11-04 04:58:30.897921753 +0000 UTC m=+1.190909028"
Nov 4 04:58:30.921269 kubelet[2748]: I1104 04:58:30.921209 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af" podStartSLOduration=0.921189773 podStartE2EDuration="921.189773ms" podCreationTimestamp="2025-11-04 04:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:58:30.907934571 +0000 UTC m=+1.200921854" watchObservedRunningTime="2025-11-04 04:58:30.921189773 +0000 UTC m=+1.214177048"
Nov 4 04:58:30.947910 kubelet[2748]: I1104 04:58:30.947842 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.948728 kubelet[2748]: I1104 04:58:30.948651 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.951582 kubelet[2748]: E1104 04:58:30.951481 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:30.970520 kubelet[2748]: W1104 04:58:30.970485 2748 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 4 04:58:30.970683 kubelet[2748]: E1104 04:58:30.970644 2748 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4508.0.0-n-4006da48af\" already exists" pod="kube-system/kube-scheduler-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.970849 kubelet[2748]: E1104 04:58:30.970829 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:30.973455 kubelet[2748]: W1104 04:58:30.973425 2748 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 4 04:58:30.973622 kubelet[2748]: E1104 04:58:30.973485 2748 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4508.0.0-n-4006da48af\" already exists" pod="kube-system/kube-apiserver-ci-4508.0.0-n-4006da48af"
Nov 4 04:58:30.974432 kubelet[2748]: E1104 04:58:30.973661 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:30.974432 kubelet[2748]: I1104 04:58:30.973946 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4508.0.0-n-4006da48af" podStartSLOduration=0.973933098 podStartE2EDuration="973.933098ms" podCreationTimestamp="2025-11-04 04:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:58:30.921610863 +0000 UTC m=+1.214598148" watchObservedRunningTime="2025-11-04 04:58:30.973933098 +0000 UTC m=+1.266920381"
Nov 4 04:58:31.950202 kubelet[2748]: E1104 04:58:31.950145 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:31.950633 kubelet[2748]: E1104 04:58:31.950221 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:33.862995 kubelet[2748]: I1104 04:58:33.862954 2748 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 4 04:58:33.864066 containerd[1595]: time="2025-11-04T04:58:33.863746371Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 4 04:58:33.865015 kubelet[2748]: I1104 04:58:33.864303 2748 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 4 04:58:34.855661 systemd[1]: Created slice kubepods-besteffort-podb2a6a818_c667_4bf1_8f87_96916417bc59.slice - libcontainer container kubepods-besteffort-podb2a6a818_c667_4bf1_8f87_96916417bc59.slice.
Nov 4 04:58:34.899667 kubelet[2748]: I1104 04:58:34.899610 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2a6a818-c667-4bf1-8f87-96916417bc59-lib-modules\") pod \"kube-proxy-t7k4l\" (UID: \"b2a6a818-c667-4bf1-8f87-96916417bc59\") " pod="kube-system/kube-proxy-t7k4l"
Nov 4 04:58:34.900297 kubelet[2748]: I1104 04:58:34.899767 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2a6a818-c667-4bf1-8f87-96916417bc59-xtables-lock\") pod \"kube-proxy-t7k4l\" (UID: \"b2a6a818-c667-4bf1-8f87-96916417bc59\") " pod="kube-system/kube-proxy-t7k4l"
Nov 4 04:58:34.900297 kubelet[2748]: I1104 04:58:34.899790 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlnd7\" (UniqueName: \"kubernetes.io/projected/b2a6a818-c667-4bf1-8f87-96916417bc59-kube-api-access-xlnd7\") pod \"kube-proxy-t7k4l\" (UID: \"b2a6a818-c667-4bf1-8f87-96916417bc59\") " pod="kube-system/kube-proxy-t7k4l"
Nov 4 04:58:34.900576 kubelet[2748]: I1104 04:58:34.900512 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2a6a818-c667-4bf1-8f87-96916417bc59-kube-proxy\") pod \"kube-proxy-t7k4l\" (UID: \"b2a6a818-c667-4bf1-8f87-96916417bc59\") " pod="kube-system/kube-proxy-t7k4l"
Nov 4 04:58:34.980571 systemd[1]: Created slice kubepods-besteffort-podb8163e61_1e38_4b0d_9d2a_9178d23be890.slice - libcontainer container kubepods-besteffort-podb8163e61_1e38_4b0d_9d2a_9178d23be890.slice.
Nov 4 04:58:35.001831 kubelet[2748]: I1104 04:58:35.001672 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b8163e61-1e38-4b0d-9d2a-9178d23be890-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hmnc2\" (UID: \"b8163e61-1e38-4b0d-9d2a-9178d23be890\") " pod="tigera-operator/tigera-operator-7dcd859c48-hmnc2"
Nov 4 04:58:35.001831 kubelet[2748]: I1104 04:58:35.001802 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzg45\" (UniqueName: \"kubernetes.io/projected/b8163e61-1e38-4b0d-9d2a-9178d23be890-kube-api-access-rzg45\") pod \"tigera-operator-7dcd859c48-hmnc2\" (UID: \"b8163e61-1e38-4b0d-9d2a-9178d23be890\") " pod="tigera-operator/tigera-operator-7dcd859c48-hmnc2"
Nov 4 04:58:35.165462 kubelet[2748]: E1104 04:58:35.165312 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:35.167187 containerd[1595]: time="2025-11-04T04:58:35.167067939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t7k4l,Uid:b2a6a818-c667-4bf1-8f87-96916417bc59,Namespace:kube-system,Attempt:0,}"
Nov 4 04:58:35.214324 containerd[1595]: time="2025-11-04T04:58:35.214272758Z" level=info msg="connecting to shim b96b9a10032116d1c42c4e06be6ed9eb7d4e9ec8ba24e363aa5c532035a2d745" address="unix:///run/containerd/s/256aca193df591508f5b9a2dc0466fcd8601156725a23b91f5244bf4b38d75c2" namespace=k8s.io protocol=ttrpc version=3
Nov 4 04:58:35.254770 systemd[1]: Started cri-containerd-b96b9a10032116d1c42c4e06be6ed9eb7d4e9ec8ba24e363aa5c532035a2d745.scope - libcontainer container b96b9a10032116d1c42c4e06be6ed9eb7d4e9ec8ba24e363aa5c532035a2d745.
Nov 4 04:58:35.286320 containerd[1595]: time="2025-11-04T04:58:35.286281953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hmnc2,Uid:b8163e61-1e38-4b0d-9d2a-9178d23be890,Namespace:tigera-operator,Attempt:0,}"
Nov 4 04:58:35.301692 containerd[1595]: time="2025-11-04T04:58:35.301618617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t7k4l,Uid:b2a6a818-c667-4bf1-8f87-96916417bc59,Namespace:kube-system,Attempt:0,} returns sandbox id \"b96b9a10032116d1c42c4e06be6ed9eb7d4e9ec8ba24e363aa5c532035a2d745\""
Nov 4 04:58:35.303562 kubelet[2748]: E1104 04:58:35.303526 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:35.316220 containerd[1595]: time="2025-11-04T04:58:35.316086694Z" level=info msg="CreateContainer within sandbox \"b96b9a10032116d1c42c4e06be6ed9eb7d4e9ec8ba24e363aa5c532035a2d745\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 4 04:58:35.332038 containerd[1595]: time="2025-11-04T04:58:35.331974887Z" level=info msg="connecting to shim ee3bace4dc78262fb33b040275a68263d1546f482c55e1d72ecb0429ee03f04e" address="unix:///run/containerd/s/e591c490304336c12ccaf7493ac56fa6b933cbdc2e5c34590e11747e4735d48b" namespace=k8s.io protocol=ttrpc version=3
Nov 4 04:58:35.334633 containerd[1595]: time="2025-11-04T04:58:35.334568946Z" level=info msg="Container 69596439f3a1ddabab9137124d3f29c2542734e60218a8d3498a2310679fc248: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:58:35.345275 containerd[1595]: time="2025-11-04T04:58:35.345202611Z" level=info msg="CreateContainer within sandbox \"b96b9a10032116d1c42c4e06be6ed9eb7d4e9ec8ba24e363aa5c532035a2d745\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"69596439f3a1ddabab9137124d3f29c2542734e60218a8d3498a2310679fc248\""
Nov 4 04:58:35.346205 containerd[1595]: time="2025-11-04T04:58:35.346166091Z" level=info msg="StartContainer for \"69596439f3a1ddabab9137124d3f29c2542734e60218a8d3498a2310679fc248\""
Nov 4 04:58:35.348115 containerd[1595]: time="2025-11-04T04:58:35.348080724Z" level=info msg="connecting to shim 69596439f3a1ddabab9137124d3f29c2542734e60218a8d3498a2310679fc248" address="unix:///run/containerd/s/256aca193df591508f5b9a2dc0466fcd8601156725a23b91f5244bf4b38d75c2" protocol=ttrpc version=3
Nov 4 04:58:35.376646 systemd[1]: Started cri-containerd-ee3bace4dc78262fb33b040275a68263d1546f482c55e1d72ecb0429ee03f04e.scope - libcontainer container ee3bace4dc78262fb33b040275a68263d1546f482c55e1d72ecb0429ee03f04e.
Nov 4 04:58:35.384446 systemd[1]: Started cri-containerd-69596439f3a1ddabab9137124d3f29c2542734e60218a8d3498a2310679fc248.scope - libcontainer container 69596439f3a1ddabab9137124d3f29c2542734e60218a8d3498a2310679fc248.
Nov 4 04:58:35.461367 containerd[1595]: time="2025-11-04T04:58:35.460726722Z" level=info msg="StartContainer for \"69596439f3a1ddabab9137124d3f29c2542734e60218a8d3498a2310679fc248\" returns successfully"
Nov 4 04:58:35.466875 containerd[1595]: time="2025-11-04T04:58:35.466804749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hmnc2,Uid:b8163e61-1e38-4b0d-9d2a-9178d23be890,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ee3bace4dc78262fb33b040275a68263d1546f482c55e1d72ecb0429ee03f04e\""
Nov 4 04:58:35.469610 containerd[1595]: time="2025-11-04T04:58:35.469567171Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 4 04:58:35.471239 systemd-resolved[1280]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Nov 4 04:58:35.967117 kubelet[2748]: E1104 04:58:35.967083 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:36.329743 systemd-timesyncd[1458]: Contacted time server 50.218.103.254:123 (2.flatcar.pool.ntp.org).
Nov 4 04:58:36.330327 systemd-timesyncd[1458]: Initial clock synchronization to Tue 2025-11-04 04:58:36.476017 UTC.
Nov 4 04:58:36.400444 kubelet[2748]: E1104 04:58:36.400186 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:36.416818 kubelet[2748]: I1104 04:58:36.416753 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t7k4l" podStartSLOduration=2.416731132 podStartE2EDuration="2.416731132s" podCreationTimestamp="2025-11-04 04:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:58:35.98056635 +0000 UTC m=+6.273553630" watchObservedRunningTime="2025-11-04 04:58:36.416731132 +0000 UTC m=+6.709718432"
Nov 4 04:58:36.882898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227034861.mount: Deactivated successfully.
Nov 4 04:58:36.952870 kubelet[2748]: E1104 04:58:36.952779 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:36.970194 kubelet[2748]: E1104 04:58:36.969763 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:36.970194 kubelet[2748]: E1104 04:58:36.969805 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:37.403681 kubelet[2748]: E1104 04:58:37.403318 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:37.971721 kubelet[2748]: E1104 04:58:37.971677 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:37.972971 kubelet[2748]: E1104 04:58:37.972148 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:38.647725 containerd[1595]: time="2025-11-04T04:58:38.647653252Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:38.648742 containerd[1595]: time="2025-11-04T04:58:38.648491493Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205"
Nov 4 04:58:38.649314 containerd[1595]: time="2025-11-04T04:58:38.649282769Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:38.651186 containerd[1595]: time="2025-11-04T04:58:38.651154096Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:38.652236 containerd[1595]: time="2025-11-04T04:58:38.651759041Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.182141162s"
Nov 4 04:58:38.652236 containerd[1595]: time="2025-11-04T04:58:38.651792576Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 4 04:58:38.655519 containerd[1595]: time="2025-11-04T04:58:38.655484373Z" level=info msg="CreateContainer within sandbox \"ee3bace4dc78262fb33b040275a68263d1546f482c55e1d72ecb0429ee03f04e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 4 04:58:38.667091 containerd[1595]: time="2025-11-04T04:58:38.664482366Z" level=info msg="Container 293e5ffa58f3a20b4fd95cb7234eae55d19245bc13c2fc0733b98283ac152ae0: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:58:38.675715 containerd[1595]: time="2025-11-04T04:58:38.675664944Z" level=info msg="CreateContainer within sandbox \"ee3bace4dc78262fb33b040275a68263d1546f482c55e1d72ecb0429ee03f04e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"293e5ffa58f3a20b4fd95cb7234eae55d19245bc13c2fc0733b98283ac152ae0\""
Nov 4 04:58:38.676912 containerd[1595]: time="2025-11-04T04:58:38.676872969Z" level=info msg="StartContainer for \"293e5ffa58f3a20b4fd95cb7234eae55d19245bc13c2fc0733b98283ac152ae0\""
Nov 4 04:58:38.679116 containerd[1595]: time="2025-11-04T04:58:38.679076481Z" level=info msg="connecting to shim 293e5ffa58f3a20b4fd95cb7234eae55d19245bc13c2fc0733b98283ac152ae0" address="unix:///run/containerd/s/e591c490304336c12ccaf7493ac56fa6b933cbdc2e5c34590e11747e4735d48b" protocol=ttrpc version=3
Nov 4 04:58:38.712666 systemd[1]: Started cri-containerd-293e5ffa58f3a20b4fd95cb7234eae55d19245bc13c2fc0733b98283ac152ae0.scope - libcontainer container 293e5ffa58f3a20b4fd95cb7234eae55d19245bc13c2fc0733b98283ac152ae0.
Nov 4 04:58:38.750246 containerd[1595]: time="2025-11-04T04:58:38.750140765Z" level=info msg="StartContainer for \"293e5ffa58f3a20b4fd95cb7234eae55d19245bc13c2fc0733b98283ac152ae0\" returns successfully"
Nov 4 04:58:45.728164 update_engine[1569]: I20251104 04:58:45.727957 1569 update_attempter.cc:509] Updating boot flags...
Nov 4 04:58:45.888642 sudo[1825]: pam_unix(sudo:session): session closed for user root
Nov 4 04:58:45.896423 sshd[1824]: Connection closed by 147.75.109.163 port 45596
Nov 4 04:58:45.897241 sshd-session[1821]: pam_unix(sshd:session): session closed for user core
Nov 4 04:58:45.910877 systemd[1]: sshd@6-164.92.104.185:22-147.75.109.163:45596.service: Deactivated successfully.
Nov 4 04:58:45.918055 systemd[1]: session-7.scope: Deactivated successfully.
Nov 4 04:58:45.921894 systemd[1]: session-7.scope: Consumed 5.233s CPU time, 157.4M memory peak.
Nov 4 04:58:45.927218 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit.
Nov 4 04:58:45.954324 systemd-logind[1566]: Removed session 7.
Nov 4 04:58:51.667117 kubelet[2748]: I1104 04:58:51.667048 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hmnc2" podStartSLOduration=14.482828622 podStartE2EDuration="17.667030023s" podCreationTimestamp="2025-11-04 04:58:34 +0000 UTC" firstStartedPulling="2025-11-04 04:58:35.468893959 +0000 UTC m=+5.761881233" lastFinishedPulling="2025-11-04 04:58:38.653095358 +0000 UTC m=+8.946082634" observedRunningTime="2025-11-04 04:58:38.99224781 +0000 UTC m=+9.285235095" watchObservedRunningTime="2025-11-04 04:58:51.667030023 +0000 UTC m=+21.960017351"
Nov 4 04:58:51.677893 systemd[1]: Created slice kubepods-besteffort-pod36786569_bfc9_4465_9ead_4dc5bf6f54a2.slice - libcontainer container kubepods-besteffort-pod36786569_bfc9_4465_9ead_4dc5bf6f54a2.slice.
Nov 4 04:58:51.729421 kubelet[2748]: I1104 04:58:51.729352 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36786569-bfc9-4465-9ead-4dc5bf6f54a2-tigera-ca-bundle\") pod \"calico-typha-6d46b79799-fxwg2\" (UID: \"36786569-bfc9-4465-9ead-4dc5bf6f54a2\") " pod="calico-system/calico-typha-6d46b79799-fxwg2"
Nov 4 04:58:51.729421 kubelet[2748]: I1104 04:58:51.729433 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c96r\" (UniqueName: \"kubernetes.io/projected/36786569-bfc9-4465-9ead-4dc5bf6f54a2-kube-api-access-2c96r\") pod \"calico-typha-6d46b79799-fxwg2\" (UID: \"36786569-bfc9-4465-9ead-4dc5bf6f54a2\") " pod="calico-system/calico-typha-6d46b79799-fxwg2"
Nov 4 04:58:51.729630 kubelet[2748]: I1104 04:58:51.729460 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/36786569-bfc9-4465-9ead-4dc5bf6f54a2-typha-certs\") pod \"calico-typha-6d46b79799-fxwg2\" (UID: \"36786569-bfc9-4465-9ead-4dc5bf6f54a2\") " pod="calico-system/calico-typha-6d46b79799-fxwg2"
Nov 4 04:58:51.928947 systemd[1]: Created slice kubepods-besteffort-podf5026b47_16c1_4cdf_9bbe_408242387571.slice - libcontainer container kubepods-besteffort-podf5026b47_16c1_4cdf_9bbe_408242387571.slice.
Nov 4 04:58:51.987357 kubelet[2748]: E1104 04:58:51.987306 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:51.989252 containerd[1595]: time="2025-11-04T04:58:51.989203596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d46b79799-fxwg2,Uid:36786569-bfc9-4465-9ead-4dc5bf6f54a2,Namespace:calico-system,Attempt:0,}"
Nov 4 04:58:52.031256 containerd[1595]: time="2025-11-04T04:58:52.030593353Z" level=info msg="connecting to shim 74e1f6bc504011895c99136ce0c0a811dd717ae774144d1d0eaea72cb538ca3f" address="unix:///run/containerd/s/c1b4158e345be664e6c4976232ab18e5ada25d56cb18d73d232f28fb76383e79" namespace=k8s.io protocol=ttrpc version=3
Nov 4 04:58:52.031484 kubelet[2748]: I1104 04:58:52.031371 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f5026b47-16c1-4cdf-9bbe-408242387571-cni-net-dir\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031484 kubelet[2748]: I1104 04:58:52.031443 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ct4v\" (UniqueName: \"kubernetes.io/projected/f5026b47-16c1-4cdf-9bbe-408242387571-kube-api-access-8ct4v\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031484 kubelet[2748]: I1104 04:58:52.031467 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f5026b47-16c1-4cdf-9bbe-408242387571-flexvol-driver-host\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031484 kubelet[2748]: I1104 04:58:52.031485 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f5026b47-16c1-4cdf-9bbe-408242387571-node-certs\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031678 kubelet[2748]: I1104 04:58:52.031502 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f5026b47-16c1-4cdf-9bbe-408242387571-policysync\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031678 kubelet[2748]: I1104 04:58:52.031519 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5026b47-16c1-4cdf-9bbe-408242387571-tigera-ca-bundle\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031678 kubelet[2748]: I1104 04:58:52.031538 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f5026b47-16c1-4cdf-9bbe-408242387571-cni-bin-dir\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031678 kubelet[2748]: I1104 04:58:52.031553 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f5026b47-16c1-4cdf-9bbe-408242387571-cni-log-dir\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031678 kubelet[2748]: I1104 04:58:52.031566 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5026b47-16c1-4cdf-9bbe-408242387571-lib-modules\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031804 kubelet[2748]: I1104 04:58:52.031583 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5026b47-16c1-4cdf-9bbe-408242387571-var-lib-calico\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031804 kubelet[2748]: I1104 04:58:52.031597 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f5026b47-16c1-4cdf-9bbe-408242387571-var-run-calico\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.031804 kubelet[2748]: I1104 04:58:52.031614 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5026b47-16c1-4cdf-9bbe-408242387571-xtables-lock\") pod \"calico-node-2l56r\" (UID: \"f5026b47-16c1-4cdf-9bbe-408242387571\") " pod="calico-system/calico-node-2l56r"
Nov 4 04:58:52.100805 systemd[1]: Started cri-containerd-74e1f6bc504011895c99136ce0c0a811dd717ae774144d1d0eaea72cb538ca3f.scope - libcontainer container 74e1f6bc504011895c99136ce0c0a811dd717ae774144d1d0eaea72cb538ca3f.
Nov 4 04:58:52.127176 kubelet[2748]: E1104 04:58:52.127121 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d"
Nov 4 04:58:52.144355 kubelet[2748]: E1104 04:58:52.144317 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.144355 kubelet[2748]: W1104 04:58:52.144347 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.144997 kubelet[2748]: E1104 04:58:52.144966 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.173411 kubelet[2748]: E1104 04:58:52.173304 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.173411 kubelet[2748]: W1104 04:58:52.173337 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.173411 kubelet[2748]: E1104 04:58:52.173360 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.204772 kubelet[2748]: E1104 04:58:52.204728 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.204772 kubelet[2748]: W1104 04:58:52.204755 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.204772 kubelet[2748]: E1104 04:58:52.204779 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.205116 kubelet[2748]: E1104 04:58:52.204986 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.205116 kubelet[2748]: W1104 04:58:52.204997 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.205116 kubelet[2748]: E1104 04:58:52.205010 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.205628 kubelet[2748]: E1104 04:58:52.205609 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.205628 kubelet[2748]: W1104 04:58:52.205626 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.205716 kubelet[2748]: E1104 04:58:52.205643 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.205916 kubelet[2748]: E1104 04:58:52.205884 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.205916 kubelet[2748]: W1104 04:58:52.205897 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.205916 kubelet[2748]: E1104 04:58:52.205908 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.207931 kubelet[2748]: E1104 04:58:52.207902 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.207931 kubelet[2748]: W1104 04:58:52.207920 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.207931 kubelet[2748]: E1104 04:58:52.207935 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.208522 kubelet[2748]: E1104 04:58:52.208507 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.208522 kubelet[2748]: W1104 04:58:52.208521 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.208591 kubelet[2748]: E1104 04:58:52.208535 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.211201 kubelet[2748]: E1104 04:58:52.211180 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.211293 kubelet[2748]: W1104 04:58:52.211197 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.211293 kubelet[2748]: E1104 04:58:52.211258 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.211781 kubelet[2748]: E1104 04:58:52.211760 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.211781 kubelet[2748]: W1104 04:58:52.211776 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.211874 kubelet[2748]: E1104 04:58:52.211790 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.212354 kubelet[2748]: E1104 04:58:52.212251 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.212354 kubelet[2748]: W1104 04:58:52.212269 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.212354 kubelet[2748]: E1104 04:58:52.212284 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:52.213416 kubelet[2748]: E1104 04:58:52.213335 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:52.213416 kubelet[2748]: W1104 04:58:52.213350 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:52.213416 kubelet[2748]: E1104 04:58:52.213366 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 4 04:58:52.213804 kubelet[2748]: E1104 04:58:52.213699 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.213804 kubelet[2748]: W1104 04:58:52.213711 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.213804 kubelet[2748]: E1104 04:58:52.213723 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.213975 kubelet[2748]: E1104 04:58:52.213965 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.214122 kubelet[2748]: W1104 04:58:52.214019 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.214122 kubelet[2748]: E1104 04:58:52.214032 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.214286 kubelet[2748]: E1104 04:58:52.214276 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.214345 kubelet[2748]: W1104 04:58:52.214336 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.214479 kubelet[2748]: E1104 04:58:52.214406 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.214617 kubelet[2748]: E1104 04:58:52.214607 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.214671 kubelet[2748]: W1104 04:58:52.214663 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.214801 kubelet[2748]: E1104 04:58:52.214718 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.214940 kubelet[2748]: E1104 04:58:52.214930 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.215003 kubelet[2748]: W1104 04:58:52.214993 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.215148 kubelet[2748]: E1104 04:58:52.215045 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.215508 kubelet[2748]: E1104 04:58:52.215493 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.215835 kubelet[2748]: W1104 04:58:52.215574 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.215835 kubelet[2748]: E1104 04:58:52.215590 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.216060 kubelet[2748]: E1104 04:58:52.216047 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.216212 kubelet[2748]: W1104 04:58:52.216145 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.216212 kubelet[2748]: E1104 04:58:52.216165 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.216582 kubelet[2748]: E1104 04:58:52.216570 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.216725 kubelet[2748]: W1104 04:58:52.216713 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.216792 kubelet[2748]: E1104 04:58:52.216784 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.217224 kubelet[2748]: E1104 04:58:52.217127 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.217224 kubelet[2748]: W1104 04:58:52.217139 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.217224 kubelet[2748]: E1104 04:58:52.217152 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.217661 kubelet[2748]: E1104 04:58:52.217510 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.217661 kubelet[2748]: W1104 04:58:52.217521 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.217661 kubelet[2748]: E1104 04:58:52.217532 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.233644 kubelet[2748]: E1104 04:58:52.233587 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:52.234709 kubelet[2748]: E1104 04:58:52.234583 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.234709 kubelet[2748]: W1104 04:58:52.234610 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.235300 containerd[1595]: time="2025-11-04T04:58:52.234937638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2l56r,Uid:f5026b47-16c1-4cdf-9bbe-408242387571,Namespace:calico-system,Attempt:0,}" Nov 4 04:58:52.235587 kubelet[2748]: E1104 04:58:52.235302 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.235587 kubelet[2748]: I1104 04:58:52.235351 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptmjs\" (UniqueName: \"kubernetes.io/projected/092f500e-4822-4935-b64c-fa41aafe316d-kube-api-access-ptmjs\") pod \"csi-node-driver-vv5n9\" (UID: \"092f500e-4822-4935-b64c-fa41aafe316d\") " pod="calico-system/csi-node-driver-vv5n9" Nov 4 04:58:52.236195 kubelet[2748]: E1104 04:58:52.236167 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.236447 kubelet[2748]: W1104 04:58:52.236185 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.236447 kubelet[2748]: E1104 04:58:52.236249 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.236447 kubelet[2748]: I1104 04:58:52.236278 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/092f500e-4822-4935-b64c-fa41aafe316d-registration-dir\") pod \"csi-node-driver-vv5n9\" (UID: \"092f500e-4822-4935-b64c-fa41aafe316d\") " pod="calico-system/csi-node-driver-vv5n9" Nov 4 04:58:52.237035 kubelet[2748]: E1104 04:58:52.236812 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.237035 kubelet[2748]: W1104 04:58:52.236828 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.237035 kubelet[2748]: E1104 04:58:52.236851 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.237415 kubelet[2748]: E1104 04:58:52.237376 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.237719 kubelet[2748]: W1104 04:58:52.237507 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.237719 kubelet[2748]: E1104 04:58:52.237533 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.237960 kubelet[2748]: E1104 04:58:52.237947 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.238421 kubelet[2748]: W1104 04:58:52.238036 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.238421 kubelet[2748]: E1104 04:58:52.238059 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.238421 kubelet[2748]: I1104 04:58:52.238087 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/092f500e-4822-4935-b64c-fa41aafe316d-varrun\") pod \"csi-node-driver-vv5n9\" (UID: \"092f500e-4822-4935-b64c-fa41aafe316d\") " pod="calico-system/csi-node-driver-vv5n9" Nov 4 04:58:52.239825 kubelet[2748]: E1104 04:58:52.239799 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.239825 kubelet[2748]: W1104 04:58:52.239821 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.240053 kubelet[2748]: E1104 04:58:52.239857 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.240339 kubelet[2748]: E1104 04:58:52.240319 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.240380 kubelet[2748]: W1104 04:58:52.240337 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.240656 kubelet[2748]: E1104 04:58:52.240633 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.241422 kubelet[2748]: E1104 04:58:52.240819 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.241524 kubelet[2748]: W1104 04:58:52.241505 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.241658 kubelet[2748]: E1104 04:58:52.241632 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.241923 kubelet[2748]: I1104 04:58:52.241676 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/092f500e-4822-4935-b64c-fa41aafe316d-kubelet-dir\") pod \"csi-node-driver-vv5n9\" (UID: \"092f500e-4822-4935-b64c-fa41aafe316d\") " pod="calico-system/csi-node-driver-vv5n9" Nov 4 04:58:52.241923 kubelet[2748]: E1104 04:58:52.241874 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.241923 kubelet[2748]: W1104 04:58:52.241885 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.241923 kubelet[2748]: E1104 04:58:52.241897 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.242210 kubelet[2748]: E1104 04:58:52.242193 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.242210 kubelet[2748]: W1104 04:58:52.242208 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.242614 kubelet[2748]: E1104 04:58:52.242590 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.243018 kubelet[2748]: I1104 04:58:52.242624 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/092f500e-4822-4935-b64c-fa41aafe316d-socket-dir\") pod \"csi-node-driver-vv5n9\" (UID: \"092f500e-4822-4935-b64c-fa41aafe316d\") " pod="calico-system/csi-node-driver-vv5n9" Nov 4 04:58:52.243301 kubelet[2748]: E1104 04:58:52.243277 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.243301 kubelet[2748]: W1104 04:58:52.243293 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.243557 kubelet[2748]: E1104 04:58:52.243306 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.245478 kubelet[2748]: E1104 04:58:52.244019 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.245478 kubelet[2748]: W1104 04:58:52.245478 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.245605 kubelet[2748]: E1104 04:58:52.245518 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.246173 kubelet[2748]: E1104 04:58:52.246147 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.246173 kubelet[2748]: W1104 04:58:52.246164 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.246478 kubelet[2748]: E1104 04:58:52.246449 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.246478 kubelet[2748]: W1104 04:58:52.246459 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.246478 kubelet[2748]: E1104 04:58:52.246471 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.246478 kubelet[2748]: E1104 04:58:52.246481 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.250552 kubelet[2748]: E1104 04:58:52.250509 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.250552 kubelet[2748]: W1104 04:58:52.250539 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.250552 kubelet[2748]: E1104 04:58:52.250563 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.343910 containerd[1595]: time="2025-11-04T04:58:52.343841958Z" level=info msg="connecting to shim 7cbec958e4630568ad3fd6259ce9bfb0fb940e23a311d83c2d6a4e0fd97e7f21" address="unix:///run/containerd/s/b0c023cef52d0cd886ae06a20b2dbd5d82ebe7b93fc7d7e6d40f52c044588d5f" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:58:52.345751 kubelet[2748]: E1104 04:58:52.344829 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.345751 kubelet[2748]: W1104 04:58:52.345705 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.345926 kubelet[2748]: E1104 04:58:52.345737 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.347414 kubelet[2748]: E1104 04:58:52.346337 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.347774 kubelet[2748]: W1104 04:58:52.347592 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.347774 kubelet[2748]: E1104 04:58:52.347741 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.348505 kubelet[2748]: E1104 04:58:52.348288 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.348505 kubelet[2748]: W1104 04:58:52.348302 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.348505 kubelet[2748]: E1104 04:58:52.348339 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.349331 kubelet[2748]: E1104 04:58:52.349305 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.350464 kubelet[2748]: W1104 04:58:52.349485 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.350464 kubelet[2748]: E1104 04:58:52.349538 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.350991 kubelet[2748]: E1104 04:58:52.350885 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.350991 kubelet[2748]: W1104 04:58:52.350952 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.351071 kubelet[2748]: E1104 04:58:52.350991 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.352689 kubelet[2748]: E1104 04:58:52.352503 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.352689 kubelet[2748]: W1104 04:58:52.352622 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.352689 kubelet[2748]: E1104 04:58:52.352656 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.353136 kubelet[2748]: E1104 04:58:52.353123 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.353356 kubelet[2748]: W1104 04:58:52.353288 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.353356 kubelet[2748]: E1104 04:58:52.353321 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.354218 kubelet[2748]: E1104 04:58:52.354111 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.354327 kubelet[2748]: W1104 04:58:52.354308 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.354561 kubelet[2748]: E1104 04:58:52.354472 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.355734 kubelet[2748]: E1104 04:58:52.355671 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.355734 kubelet[2748]: W1104 04:58:52.355686 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.355734 kubelet[2748]: E1104 04:58:52.355714 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.356734 kubelet[2748]: E1104 04:58:52.356619 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.357109 kubelet[2748]: W1104 04:58:52.356815 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.357109 kubelet[2748]: E1104 04:58:52.356852 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.358929 kubelet[2748]: E1104 04:58:52.357657 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.358929 kubelet[2748]: W1104 04:58:52.357860 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.358929 kubelet[2748]: E1104 04:58:52.357900 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.359458 kubelet[2748]: E1104 04:58:52.359442 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.359559 kubelet[2748]: W1104 04:58:52.359531 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.359744 kubelet[2748]: E1104 04:58:52.359658 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.360488 kubelet[2748]: E1104 04:58:52.360472 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.360670 kubelet[2748]: W1104 04:58:52.360563 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.360670 kubelet[2748]: E1104 04:58:52.360606 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.361907 kubelet[2748]: E1104 04:58:52.361514 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.361907 kubelet[2748]: W1104 04:58:52.361531 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.362116 kubelet[2748]: E1104 04:58:52.361908 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.362971 kubelet[2748]: E1104 04:58:52.362544 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.362971 kubelet[2748]: W1104 04:58:52.362562 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.362971 kubelet[2748]: E1104 04:58:52.362620 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.362971 kubelet[2748]: E1104 04:58:52.362837 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.362971 kubelet[2748]: W1104 04:58:52.362845 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.362971 kubelet[2748]: E1104 04:58:52.362933 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.364346 kubelet[2748]: E1104 04:58:52.363513 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.364346 kubelet[2748]: W1104 04:58:52.363524 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.364346 kubelet[2748]: E1104 04:58:52.363618 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.364346 kubelet[2748]: E1104 04:58:52.363972 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.364346 kubelet[2748]: W1104 04:58:52.364001 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.364346 kubelet[2748]: E1104 04:58:52.364075 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.364346 kubelet[2748]: E1104 04:58:52.364345 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.364574 kubelet[2748]: W1104 04:58:52.364355 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.364574 kubelet[2748]: E1104 04:58:52.364423 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.364740 kubelet[2748]: E1104 04:58:52.364710 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.364740 kubelet[2748]: W1104 04:58:52.364721 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.365908 kubelet[2748]: E1104 04:58:52.365799 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.365908 kubelet[2748]: E1104 04:58:52.365874 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.365908 kubelet[2748]: W1104 04:58:52.365884 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.366032 kubelet[2748]: E1104 04:58:52.365968 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.366870 kubelet[2748]: E1104 04:58:52.366787 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.366870 kubelet[2748]: W1104 04:58:52.366807 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.367153 kubelet[2748]: E1104 04:58:52.366891 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.367684 kubelet[2748]: E1104 04:58:52.367552 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.367684 kubelet[2748]: W1104 04:58:52.367569 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.367684 kubelet[2748]: E1104 04:58:52.367649 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.368491 containerd[1595]: time="2025-11-04T04:58:52.367556057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d46b79799-fxwg2,Uid:36786569-bfc9-4465-9ead-4dc5bf6f54a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"74e1f6bc504011895c99136ce0c0a811dd717ae774144d1d0eaea72cb538ca3f\"" Nov 4 04:58:52.368550 kubelet[2748]: E1104 04:58:52.368489 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.368550 kubelet[2748]: W1104 04:58:52.368503 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.368975 kubelet[2748]: E1104 04:58:52.368523 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:52.369894 kubelet[2748]: E1104 04:58:52.369791 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.369894 kubelet[2748]: W1104 04:58:52.369805 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.369894 kubelet[2748]: E1104 04:58:52.369817 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.371284 kubelet[2748]: E1104 04:58:52.371251 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:52.373272 containerd[1595]: time="2025-11-04T04:58:52.373236828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 04:58:52.402955 systemd[1]: Started cri-containerd-7cbec958e4630568ad3fd6259ce9bfb0fb940e23a311d83c2d6a4e0fd97e7f21.scope - libcontainer container 7cbec958e4630568ad3fd6259ce9bfb0fb940e23a311d83c2d6a4e0fd97e7f21. Nov 4 04:58:52.406829 kubelet[2748]: E1104 04:58:52.406776 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:52.406829 kubelet[2748]: W1104 04:58:52.406806 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:52.406829 kubelet[2748]: E1104 04:58:52.406830 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:52.527977 containerd[1595]: time="2025-11-04T04:58:52.527469572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2l56r,Uid:f5026b47-16c1-4cdf-9bbe-408242387571,Namespace:calico-system,Attempt:0,} returns sandbox id \"7cbec958e4630568ad3fd6259ce9bfb0fb940e23a311d83c2d6a4e0fd97e7f21\"" Nov 4 04:58:52.530125 kubelet[2748]: E1104 04:58:52.529902 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:53.729150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2451587899.mount: Deactivated successfully. Nov 4 04:58:53.902334 kubelet[2748]: E1104 04:58:53.901146 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d" Nov 4 04:58:54.593550 containerd[1595]: time="2025-11-04T04:58:54.593496203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:54.594744 containerd[1595]: time="2025-11-04T04:58:54.594712528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 4 04:58:54.595084 containerd[1595]: time="2025-11-04T04:58:54.595050591Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:54.597618 containerd[1595]: time="2025-11-04T04:58:54.597535569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:58:54.598194 containerd[1595]: time="2025-11-04T04:58:54.597830206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.224557004s" Nov 4 04:58:54.598194 containerd[1595]: time="2025-11-04T04:58:54.597862526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 4 04:58:54.599267 containerd[1595]: time="2025-11-04T04:58:54.599244058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 4 04:58:54.636912 containerd[1595]: time="2025-11-04T04:58:54.636442219Z" level=info msg="CreateContainer within sandbox \"74e1f6bc504011895c99136ce0c0a811dd717ae774144d1d0eaea72cb538ca3f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 4 04:58:54.648420 containerd[1595]: time="2025-11-04T04:58:54.647165818Z" level=info msg="Container 0d618e008b142d33f7bd956a29e6dbf01e1145eb49751b5c407b6cd2665ee2d4: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:58:54.655280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2032456848.mount: Deactivated successfully. 
Nov 4 04:58:54.684039 containerd[1595]: time="2025-11-04T04:58:54.683977790Z" level=info msg="CreateContainer within sandbox \"74e1f6bc504011895c99136ce0c0a811dd717ae774144d1d0eaea72cb538ca3f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0d618e008b142d33f7bd956a29e6dbf01e1145eb49751b5c407b6cd2665ee2d4\"" Nov 4 04:58:54.685517 containerd[1595]: time="2025-11-04T04:58:54.685481898Z" level=info msg="StartContainer for \"0d618e008b142d33f7bd956a29e6dbf01e1145eb49751b5c407b6cd2665ee2d4\"" Nov 4 04:58:54.687421 containerd[1595]: time="2025-11-04T04:58:54.686973334Z" level=info msg="connecting to shim 0d618e008b142d33f7bd956a29e6dbf01e1145eb49751b5c407b6cd2665ee2d4" address="unix:///run/containerd/s/c1b4158e345be664e6c4976232ab18e5ada25d56cb18d73d232f28fb76383e79" protocol=ttrpc version=3 Nov 4 04:58:54.718682 systemd[1]: Started cri-containerd-0d618e008b142d33f7bd956a29e6dbf01e1145eb49751b5c407b6cd2665ee2d4.scope - libcontainer container 0d618e008b142d33f7bd956a29e6dbf01e1145eb49751b5c407b6cd2665ee2d4. 
Nov 4 04:58:54.803015 containerd[1595]: time="2025-11-04T04:58:54.802966959Z" level=info msg="StartContainer for \"0d618e008b142d33f7bd956a29e6dbf01e1145eb49751b5c407b6cd2665ee2d4\" returns successfully" Nov 4 04:58:55.034028 kubelet[2748]: E1104 04:58:55.033993 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:58:55.040420 kubelet[2748]: E1104 04:58:55.040353 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.040814 kubelet[2748]: W1104 04:58:55.040623 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.040814 kubelet[2748]: E1104 04:58:55.040672 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.041464 kubelet[2748]: E1104 04:58:55.040999 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.041798 kubelet[2748]: W1104 04:58:55.041603 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.041798 kubelet[2748]: E1104 04:58:55.041642 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.042505 kubelet[2748]: E1104 04:58:55.042486 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.042626 kubelet[2748]: W1104 04:58:55.042606 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.042725 kubelet[2748]: E1104 04:58:55.042709 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.043659 kubelet[2748]: E1104 04:58:55.043483 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.043659 kubelet[2748]: W1104 04:58:55.043500 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.043659 kubelet[2748]: E1104 04:58:55.043514 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.044684 kubelet[2748]: E1104 04:58:55.044667 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.044781 kubelet[2748]: W1104 04:58:55.044765 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.044965 kubelet[2748]: E1104 04:58:55.044848 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.045112 kubelet[2748]: E1104 04:58:55.045097 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.045199 kubelet[2748]: W1104 04:58:55.045184 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.046491 kubelet[2748]: E1104 04:58:55.046451 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.047080 kubelet[2748]: E1104 04:58:55.046915 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.047080 kubelet[2748]: W1104 04:58:55.046936 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.047080 kubelet[2748]: E1104 04:58:55.046954 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.047348 kubelet[2748]: E1104 04:58:55.047332 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.047490 kubelet[2748]: W1104 04:58:55.047472 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.047592 kubelet[2748]: E1104 04:58:55.047576 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.048069 kubelet[2748]: E1104 04:58:55.047935 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.048069 kubelet[2748]: W1104 04:58:55.047953 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.048069 kubelet[2748]: E1104 04:58:55.047968 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.048667 kubelet[2748]: E1104 04:58:55.048651 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.050468 kubelet[2748]: W1104 04:58:55.050437 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.050693 kubelet[2748]: E1104 04:58:55.050571 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.050846 kubelet[2748]: E1104 04:58:55.050833 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.050935 kubelet[2748]: W1104 04:58:55.050924 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.050988 kubelet[2748]: E1104 04:58:55.050980 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.051450 kubelet[2748]: E1104 04:58:55.051298 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.051450 kubelet[2748]: W1104 04:58:55.051312 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.051450 kubelet[2748]: E1104 04:58:55.051324 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.051615 kubelet[2748]: E1104 04:58:55.051605 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.051770 kubelet[2748]: W1104 04:58:55.051666 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.051770 kubelet[2748]: E1104 04:58:55.051679 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.051886 kubelet[2748]: E1104 04:58:55.051876 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.051940 kubelet[2748]: W1104 04:58:55.051931 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.051982 kubelet[2748]: E1104 04:58:55.051975 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.052327 kubelet[2748]: E1104 04:58:55.052227 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.052327 kubelet[2748]: W1104 04:58:55.052242 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.052327 kubelet[2748]: E1104 04:58:55.052257 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.078377 kubelet[2748]: E1104 04:58:55.078342 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.078754 kubelet[2748]: W1104 04:58:55.078556 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.078754 kubelet[2748]: E1104 04:58:55.078588 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.078904 kubelet[2748]: E1104 04:58:55.078893 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.078970 kubelet[2748]: W1104 04:58:55.078960 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.079034 kubelet[2748]: E1104 04:58:55.079025 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.079333 kubelet[2748]: E1104 04:58:55.079275 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.079333 kubelet[2748]: W1104 04:58:55.079303 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.079333 kubelet[2748]: E1104 04:58:55.079333 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.080725 kubelet[2748]: E1104 04:58:55.080681 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.080725 kubelet[2748]: W1104 04:58:55.080716 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.080940 kubelet[2748]: E1104 04:58:55.080916 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.081002 kubelet[2748]: E1104 04:58:55.080991 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.081049 kubelet[2748]: W1104 04:58:55.081002 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.081049 kubelet[2748]: E1104 04:58:55.081017 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.082523 kubelet[2748]: E1104 04:58:55.082498 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.082890 kubelet[2748]: W1104 04:58:55.082655 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.082890 kubelet[2748]: E1104 04:58:55.082693 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.083059 kubelet[2748]: E1104 04:58:55.083042 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.083129 kubelet[2748]: W1104 04:58:55.083113 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.083213 kubelet[2748]: E1104 04:58:55.083190 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:58:55.083573 kubelet[2748]: E1104 04:58:55.083434 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.083573 kubelet[2748]: W1104 04:58:55.083447 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.083573 kubelet[2748]: E1104 04:58:55.083478 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:58:55.083883 kubelet[2748]: E1104 04:58:55.083870 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:58:55.083942 kubelet[2748]: W1104 04:58:55.083933 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:58:55.084020 kubelet[2748]: E1104 04:58:55.084000 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 4 04:58:55.084195 kubelet[2748]: E1104 04:58:55.084185 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:55.084443 kubelet[2748]: W1104 04:58:55.084425 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:55.084540 kubelet[2748]: E1104 04:58:55.084526 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:55.084849 kubelet[2748]: E1104 04:58:55.084829 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:55.084849 kubelet[2748]: W1104 04:58:55.084846 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:55.084929 kubelet[2748]: E1104 04:58:55.084863 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:55.085638 kubelet[2748]: E1104 04:58:55.085620 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:55.085638 kubelet[2748]: W1104 04:58:55.085635 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:55.085741 kubelet[2748]: E1104 04:58:55.085715 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:55.085863 kubelet[2748]: E1104 04:58:55.085850 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:55.085863 kubelet[2748]: W1104 04:58:55.085859 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:55.086037 kubelet[2748]: E1104 04:58:55.086021 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:55.086219 kubelet[2748]: E1104 04:58:55.086206 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:55.086264 kubelet[2748]: W1104 04:58:55.086218 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:55.086349 kubelet[2748]: E1104 04:58:55.086333 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:55.087491 kubelet[2748]: E1104 04:58:55.087469 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:55.087491 kubelet[2748]: W1104 04:58:55.087484 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:55.087491 kubelet[2748]: E1104 04:58:55.087502 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:55.087855 kubelet[2748]: E1104 04:58:55.087818 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:55.087855 kubelet[2748]: W1104 04:58:55.087833 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:55.087855 kubelet[2748]: E1104 04:58:55.087844 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:55.090497 kubelet[2748]: E1104 04:58:55.090264 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:55.090497 kubelet[2748]: W1104 04:58:55.090296 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:55.090497 kubelet[2748]: E1104 04:58:55.090319 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:55.090799 kubelet[2748]: E1104 04:58:55.090786 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:55.090881 kubelet[2748]: W1104 04:58:55.090856 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:55.090940 kubelet[2748]: E1104 04:58:55.090930 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:55.898479 kubelet[2748]: E1104 04:58:55.898323 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d"
Nov 4 04:58:55.906386 containerd[1595]: time="2025-11-04T04:58:55.906336292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:55.907432 containerd[1595]: time="2025-11-04T04:58:55.907136567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0"
Nov 4 04:58:55.909224 containerd[1595]: time="2025-11-04T04:58:55.908780153Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:55.911652 containerd[1595]: time="2025-11-04T04:58:55.911604694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:55.912205 containerd[1595]: time="2025-11-04T04:58:55.912176672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.312743317s"
Nov 4 04:58:55.912205 containerd[1595]: time="2025-11-04T04:58:55.912205709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 4 04:58:55.917723 containerd[1595]: time="2025-11-04T04:58:55.917665105Z" level=info msg="CreateContainer within sandbox \"7cbec958e4630568ad3fd6259ce9bfb0fb940e23a311d83c2d6a4e0fd97e7f21\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 4 04:58:55.947830 containerd[1595]: time="2025-11-04T04:58:55.944899698Z" level=info msg="Container 3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:58:55.962590 containerd[1595]: time="2025-11-04T04:58:55.962518937Z" level=info msg="CreateContainer within sandbox \"7cbec958e4630568ad3fd6259ce9bfb0fb940e23a311d83c2d6a4e0fd97e7f21\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb\""
Nov 4 04:58:55.964418 containerd[1595]: time="2025-11-04T04:58:55.963296375Z" level=info msg="StartContainer for \"3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb\""
Nov 4 04:58:55.967522 containerd[1595]: time="2025-11-04T04:58:55.966186307Z" level=info msg="connecting to shim 3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb" address="unix:///run/containerd/s/b0c023cef52d0cd886ae06a20b2dbd5d82ebe7b93fc7d7e6d40f52c044588d5f" protocol=ttrpc version=3
Nov 4 04:58:56.002469 systemd[1]: Started cri-containerd-3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb.scope - libcontainer container 3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb.
Nov 4 04:58:56.044446 kubelet[2748]: I1104 04:58:56.043532 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 4 04:58:56.044446 kubelet[2748]: E1104 04:58:56.044021 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:56.059194 kubelet[2748]: E1104 04:58:56.059148 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.059194 kubelet[2748]: W1104 04:58:56.059179 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.059194 kubelet[2748]: E1104 04:58:56.059209 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.059951 kubelet[2748]: E1104 04:58:56.059920 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.059951 kubelet[2748]: W1104 04:58:56.059935 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.059951 kubelet[2748]: E1104 04:58:56.059952 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.061431 kubelet[2748]: E1104 04:58:56.061378 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.061431 kubelet[2748]: W1104 04:58:56.061405 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.061431 kubelet[2748]: E1104 04:58:56.061419 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.062029 kubelet[2748]: E1104 04:58:56.061962 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.062029 kubelet[2748]: W1104 04:58:56.061981 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.062029 kubelet[2748]: E1104 04:58:56.061997 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.062564 kubelet[2748]: E1104 04:58:56.062544 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.062564 kubelet[2748]: W1104 04:58:56.062558 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.062843 kubelet[2748]: E1104 04:58:56.062571 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.062843 kubelet[2748]: E1104 04:58:56.062796 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.062843 kubelet[2748]: W1104 04:58:56.062808 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.062843 kubelet[2748]: E1104 04:58:56.062840 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.063248 kubelet[2748]: E1104 04:58:56.063171 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.063248 kubelet[2748]: W1104 04:58:56.063189 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.063248 kubelet[2748]: E1104 04:58:56.063204 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.063489 kubelet[2748]: E1104 04:58:56.063475 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.063489 kubelet[2748]: W1104 04:58:56.063487 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.063667 kubelet[2748]: E1104 04:58:56.063497 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.063750 kubelet[2748]: E1104 04:58:56.063736 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.063750 kubelet[2748]: W1104 04:58:56.063748 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.064160 kubelet[2748]: E1104 04:58:56.063758 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.064160 kubelet[2748]: E1104 04:58:56.063930 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.064160 kubelet[2748]: W1104 04:58:56.063938 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.064160 kubelet[2748]: E1104 04:58:56.063946 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.064160 kubelet[2748]: E1104 04:58:56.064156 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.064415 kubelet[2748]: W1104 04:58:56.064166 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.064415 kubelet[2748]: E1104 04:58:56.064179 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.064532 kubelet[2748]: E1104 04:58:56.064506 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.064532 kubelet[2748]: W1104 04:58:56.064519 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.064771 kubelet[2748]: E1104 04:58:56.064551 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.064771 kubelet[2748]: E1104 04:58:56.064740 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.064771 kubelet[2748]: W1104 04:58:56.064748 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.064771 kubelet[2748]: E1104 04:58:56.064759 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.065142 kubelet[2748]: E1104 04:58:56.064956 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.065142 kubelet[2748]: W1104 04:58:56.064966 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.065142 kubelet[2748]: E1104 04:58:56.064976 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.065142 kubelet[2748]: E1104 04:58:56.065142 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.065142 kubelet[2748]: W1104 04:58:56.065149 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.065142 kubelet[2748]: E1104 04:58:56.065171 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.079562 containerd[1595]: time="2025-11-04T04:58:56.079239832Z" level=info msg="StartContainer for \"3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb\" returns successfully"
Nov 4 04:58:56.090173 kubelet[2748]: E1104 04:58:56.090061 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.090173 kubelet[2748]: W1104 04:58:56.090114 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.090632 kubelet[2748]: E1104 04:58:56.090247 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.091205 kubelet[2748]: E1104 04:58:56.091142 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.091446 kubelet[2748]: W1104 04:58:56.091168 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.091543 kubelet[2748]: E1104 04:58:56.091468 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.091978 kubelet[2748]: E1104 04:58:56.091957 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.092166 kubelet[2748]: W1104 04:58:56.091971 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.092340 kubelet[2748]: E1104 04:58:56.092225 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.092809 kubelet[2748]: E1104 04:58:56.092789 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.092809 kubelet[2748]: W1104 04:58:56.092803 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.093082 kubelet[2748]: E1104 04:58:56.092821 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.093356 kubelet[2748]: E1104 04:58:56.093338 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.093356 kubelet[2748]: W1104 04:58:56.093352 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.093801 kubelet[2748]: E1104 04:58:56.093379 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.094124 kubelet[2748]: E1104 04:58:56.094097 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.094450 kubelet[2748]: W1104 04:58:56.094207 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.094450 kubelet[2748]: E1104 04:58:56.094237 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.095426 kubelet[2748]: E1104 04:58:56.095353 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.095426 kubelet[2748]: W1104 04:58:56.095375 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.095693 kubelet[2748]: E1104 04:58:56.095664 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.096958 kubelet[2748]: E1104 04:58:56.096650 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.096958 kubelet[2748]: W1104 04:58:56.096810 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.097129 kubelet[2748]: E1104 04:58:56.097109 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.097417 kubelet[2748]: E1104 04:58:56.097362 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.097417 kubelet[2748]: W1104 04:58:56.097379 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.097693 kubelet[2748]: E1104 04:58:56.097594 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.097942 kubelet[2748]: E1104 04:58:56.097904 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.097942 kubelet[2748]: W1104 04:58:56.097922 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.098238 kubelet[2748]: E1104 04:58:56.098180 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.098518 kubelet[2748]: E1104 04:58:56.098416 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.098518 kubelet[2748]: W1104 04:58:56.098433 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.098518 kubelet[2748]: E1104 04:58:56.098457 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.099064 kubelet[2748]: E1104 04:58:56.098892 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.099064 kubelet[2748]: W1104 04:58:56.098910 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.099064 kubelet[2748]: E1104 04:58:56.098945 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.099546 kubelet[2748]: E1104 04:58:56.099532 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.099972 kubelet[2748]: W1104 04:58:56.099955 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.100118 kubelet[2748]: E1104 04:58:56.100056 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.100224 kubelet[2748]: E1104 04:58:56.100213 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.100358 kubelet[2748]: W1104 04:58:56.100268 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.100358 kubelet[2748]: E1104 04:58:56.100283 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.100710 kubelet[2748]: E1104 04:58:56.100570 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.100710 kubelet[2748]: W1104 04:58:56.100581 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.100710 kubelet[2748]: E1104 04:58:56.100592 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.100869 kubelet[2748]: E1104 04:58:56.100859 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.100917 kubelet[2748]: W1104 04:58:56.100909 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.100975 kubelet[2748]: E1104 04:58:56.100967 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.101653 kubelet[2748]: E1104 04:58:56.101534 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.101653 kubelet[2748]: W1104 04:58:56.101548 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.101653 kubelet[2748]: E1104 04:58:56.101562 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.101904 kubelet[2748]: E1104 04:58:56.101893 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 04:58:56.101990 kubelet[2748]: W1104 04:58:56.101952 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 04:58:56.101990 kubelet[2748]: E1104 04:58:56.101966 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 04:58:56.103739 systemd[1]: cri-containerd-3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb.scope: Deactivated successfully.
Nov 4 04:58:56.130450 containerd[1595]: time="2025-11-04T04:58:56.130210686Z" level=info msg="received exit event container_id:\"3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb\" id:\"3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb\" pid:3430 exited_at:{seconds:1762232336 nanos:108228926}"
Nov 4 04:58:56.170171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b85b01b5ddfbe9ebfec43d2fc35a7767aa91632689b9ca933595eb6445322bb-rootfs.mount: Deactivated successfully.
Nov 4 04:58:57.049679 kubelet[2748]: E1104 04:58:57.049609 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 04:58:57.051976 containerd[1595]: time="2025-11-04T04:58:57.051879454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 4 04:58:57.077294 kubelet[2748]: I1104 04:58:57.077092 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d46b79799-fxwg2" podStartSLOduration=3.8506898769999998 podStartE2EDuration="6.077069512s" podCreationTimestamp="2025-11-04 04:58:51 +0000 UTC" firstStartedPulling="2025-11-04 04:58:52.372727795 +0000 UTC m=+22.665715058" lastFinishedPulling="2025-11-04 04:58:54.599107418 +0000 UTC m=+24.892094693" observedRunningTime="2025-11-04 04:58:55.071540427 +0000 UTC m=+25.364527712" watchObservedRunningTime="2025-11-04 04:58:57.077069512 +0000 UTC m=+27.370056819"
Nov 4 04:58:57.899845 kubelet[2748]: E1104 04:58:57.899793 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d"
Nov 4 04:58:59.834954 containerd[1595]: time="2025-11-04T04:58:59.834782774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:59.836843 containerd[1595]: time="2025-11-04T04:58:59.836782785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291"
Nov 4 04:58:59.837512 containerd[1595]: time="2025-11-04T04:58:59.837274523Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:59.838917 containerd[1595]: time="2025-11-04T04:58:59.838890101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:58:59.839729 containerd[1595]: time="2025-11-04T04:58:59.839702092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.787772484s"
Nov 4 04:58:59.839856 containerd[1595]: time="2025-11-04T04:58:59.839839770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 4 04:58:59.844214 containerd[1595]: time="2025-11-04T04:58:59.844179842Z" level=info msg="CreateContainer within sandbox \"7cbec958e4630568ad3fd6259ce9bfb0fb940e23a311d83c2d6a4e0fd97e7f21\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 4 04:58:59.858418 containerd[1595]: time="2025-11-04T04:58:59.858241270Z" level=info msg="Container 8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:58:59.863312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2096996917.mount: Deactivated successfully.
Nov 4 04:58:59.885075 containerd[1595]: time="2025-11-04T04:58:59.885002010Z" level=info msg="CreateContainer within sandbox \"7cbec958e4630568ad3fd6259ce9bfb0fb940e23a311d83c2d6a4e0fd97e7f21\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1\""
Nov 4 04:58:59.886068 containerd[1595]: time="2025-11-04T04:58:59.885797633Z" level=info msg="StartContainer for \"8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1\""
Nov 4 04:58:59.887800 containerd[1595]: time="2025-11-04T04:58:59.887707870Z" level=info msg="connecting to shim 8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1" address="unix:///run/containerd/s/b0c023cef52d0cd886ae06a20b2dbd5d82ebe7b93fc7d7e6d40f52c044588d5f" protocol=ttrpc version=3
Nov 4 04:58:59.899135 kubelet[2748]: E1104 04:58:59.898744 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d"
Nov 4 04:58:59.924704 systemd[1]: Started cri-containerd-8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1.scope - libcontainer container 8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1.
Nov 4 04:58:59.975843 containerd[1595]: time="2025-11-04T04:58:59.975796486Z" level=info msg="StartContainer for \"8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1\" returns successfully" Nov 4 04:59:00.080127 kubelet[2748]: E1104 04:59:00.079617 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:00.724344 systemd[1]: cri-containerd-8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1.scope: Deactivated successfully. Nov 4 04:59:00.725524 systemd[1]: cri-containerd-8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1.scope: Consumed 645ms CPU time, 167.7M memory peak, 13.4M read from disk, 171.3M written to disk. Nov 4 04:59:00.740478 containerd[1595]: time="2025-11-04T04:59:00.739808719Z" level=info msg="received exit event container_id:\"8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1\" id:\"8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1\" pid:3522 exited_at:{seconds:1762232340 nanos:726855845}" Nov 4 04:59:00.803230 kubelet[2748]: I1104 04:59:00.802430 2748 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 04:59:00.913201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f204d7507b1f0965812c418e440feb029bc1dc9ad39b40baa8748284756c7c1-rootfs.mount: Deactivated successfully. Nov 4 04:59:00.937054 systemd[1]: Created slice kubepods-burstable-pod9022ded0_c2da_40c2_8e1c_cd8e1a1c5390.slice - libcontainer container kubepods-burstable-pod9022ded0_c2da_40c2_8e1c_cd8e1a1c5390.slice. Nov 4 04:59:00.977851 systemd[1]: Created slice kubepods-besteffort-pod58ac8885_f887_4233_b0b8_becfde233cd2.slice - libcontainer container kubepods-besteffort-pod58ac8885_f887_4233_b0b8_becfde233cd2.slice. 
Nov 4 04:59:00.986071 systemd[1]: Created slice kubepods-burstable-poda7b4ab04_49be_48a0_9728_5a995e7ce19d.slice - libcontainer container kubepods-burstable-poda7b4ab04_49be_48a0_9728_5a995e7ce19d.slice. Nov 4 04:59:01.005015 systemd[1]: Created slice kubepods-besteffort-pod222bf072_72e8_4f95_b557_9dabd6a2bea1.slice - libcontainer container kubepods-besteffort-pod222bf072_72e8_4f95_b557_9dabd6a2bea1.slice. Nov 4 04:59:01.019250 systemd[1]: Created slice kubepods-besteffort-podc80a2d93_8040_43fb_ae27_fda397ce6d05.slice - libcontainer container kubepods-besteffort-podc80a2d93_8040_43fb_ae27_fda397ce6d05.slice. Nov 4 04:59:01.034173 systemd[1]: Created slice kubepods-besteffort-poda786dc4c_0c1b_411d_9e1c_798267553660.slice - libcontainer container kubepods-besteffort-poda786dc4c_0c1b_411d_9e1c_798267553660.slice. Nov 4 04:59:01.045743 systemd[1]: Created slice kubepods-besteffort-poddfabd4a9_e6b0_4f3f_aab6_4d707177593b.slice - libcontainer container kubepods-besteffort-poddfabd4a9_e6b0_4f3f_aab6_4d707177593b.slice. 
Nov 4 04:59:01.054305 kubelet[2748]: I1104 04:59:01.054250 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a786dc4c-0c1b-411d-9e1c-798267553660-goldmane-ca-bundle\") pod \"goldmane-666569f655-psj7t\" (UID: \"a786dc4c-0c1b-411d-9e1c-798267553660\") " pod="calico-system/goldmane-666569f655-psj7t" Nov 4 04:59:01.054305 kubelet[2748]: I1104 04:59:01.054304 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9022ded0-c2da-40c2-8e1c-cd8e1a1c5390-config-volume\") pod \"coredns-668d6bf9bc-6qrxf\" (UID: \"9022ded0-c2da-40c2-8e1c-cd8e1a1c5390\") " pod="kube-system/coredns-668d6bf9bc-6qrxf" Nov 4 04:59:01.055698 kubelet[2748]: I1104 04:59:01.054325 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88gkx\" (UniqueName: \"kubernetes.io/projected/9022ded0-c2da-40c2-8e1c-cd8e1a1c5390-kube-api-access-88gkx\") pod \"coredns-668d6bf9bc-6qrxf\" (UID: \"9022ded0-c2da-40c2-8e1c-cd8e1a1c5390\") " pod="kube-system/coredns-668d6bf9bc-6qrxf" Nov 4 04:59:01.055698 kubelet[2748]: I1104 04:59:01.054344 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24rgm\" (UniqueName: \"kubernetes.io/projected/c80a2d93-8040-43fb-ae27-fda397ce6d05-kube-api-access-24rgm\") pod \"calico-apiserver-64fff7f795-8w5t9\" (UID: \"c80a2d93-8040-43fb-ae27-fda397ce6d05\") " pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" Nov 4 04:59:01.055698 kubelet[2748]: I1104 04:59:01.054370 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hgb4\" (UniqueName: \"kubernetes.io/projected/222bf072-72e8-4f95-b557-9dabd6a2bea1-kube-api-access-2hgb4\") pod \"calico-kube-controllers-b455dd7c6-8tlqq\" (UID: 
\"222bf072-72e8-4f95-b557-9dabd6a2bea1\") " pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" Nov 4 04:59:01.055698 kubelet[2748]: I1104 04:59:01.054408 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plp4c\" (UniqueName: \"kubernetes.io/projected/a786dc4c-0c1b-411d-9e1c-798267553660-kube-api-access-plp4c\") pod \"goldmane-666569f655-psj7t\" (UID: \"a786dc4c-0c1b-411d-9e1c-798267553660\") " pod="calico-system/goldmane-666569f655-psj7t" Nov 4 04:59:01.055698 kubelet[2748]: I1104 04:59:01.054427 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/222bf072-72e8-4f95-b557-9dabd6a2bea1-tigera-ca-bundle\") pod \"calico-kube-controllers-b455dd7c6-8tlqq\" (UID: \"222bf072-72e8-4f95-b557-9dabd6a2bea1\") " pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" Nov 4 04:59:01.055842 kubelet[2748]: I1104 04:59:01.054443 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-whisker-ca-bundle\") pod \"whisker-7d56b5f447-8pgbp\" (UID: \"dfabd4a9-e6b0-4f3f-aab6-4d707177593b\") " pod="calico-system/whisker-7d56b5f447-8pgbp" Nov 4 04:59:01.055842 kubelet[2748]: I1104 04:59:01.054465 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58ac8885-f887-4233-b0b8-becfde233cd2-calico-apiserver-certs\") pod \"calico-apiserver-64fff7f795-2rvd7\" (UID: \"58ac8885-f887-4233-b0b8-becfde233cd2\") " pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" Nov 4 04:59:01.055842 kubelet[2748]: I1104 04:59:01.054585 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a7b4ab04-49be-48a0-9728-5a995e7ce19d-config-volume\") pod \"coredns-668d6bf9bc-xsjpm\" (UID: \"a7b4ab04-49be-48a0-9728-5a995e7ce19d\") " pod="kube-system/coredns-668d6bf9bc-xsjpm" Nov 4 04:59:01.055842 kubelet[2748]: I1104 04:59:01.054607 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c80a2d93-8040-43fb-ae27-fda397ce6d05-calico-apiserver-certs\") pod \"calico-apiserver-64fff7f795-8w5t9\" (UID: \"c80a2d93-8040-43fb-ae27-fda397ce6d05\") " pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" Nov 4 04:59:01.055842 kubelet[2748]: I1104 04:59:01.054653 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6q9p\" (UniqueName: \"kubernetes.io/projected/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-kube-api-access-p6q9p\") pod \"whisker-7d56b5f447-8pgbp\" (UID: \"dfabd4a9-e6b0-4f3f-aab6-4d707177593b\") " pod="calico-system/whisker-7d56b5f447-8pgbp" Nov 4 04:59:01.055971 kubelet[2748]: I1104 04:59:01.054676 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4njh\" (UniqueName: \"kubernetes.io/projected/a7b4ab04-49be-48a0-9728-5a995e7ce19d-kube-api-access-g4njh\") pod \"coredns-668d6bf9bc-xsjpm\" (UID: \"a7b4ab04-49be-48a0-9728-5a995e7ce19d\") " pod="kube-system/coredns-668d6bf9bc-xsjpm" Nov 4 04:59:01.055971 kubelet[2748]: I1104 04:59:01.054698 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a786dc4c-0c1b-411d-9e1c-798267553660-config\") pod \"goldmane-666569f655-psj7t\" (UID: \"a786dc4c-0c1b-411d-9e1c-798267553660\") " pod="calico-system/goldmane-666569f655-psj7t" Nov 4 04:59:01.055971 kubelet[2748]: I1104 04:59:01.054724 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-whisker-backend-key-pair\") pod \"whisker-7d56b5f447-8pgbp\" (UID: \"dfabd4a9-e6b0-4f3f-aab6-4d707177593b\") " pod="calico-system/whisker-7d56b5f447-8pgbp" Nov 4 04:59:01.055971 kubelet[2748]: I1104 04:59:01.054748 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a786dc4c-0c1b-411d-9e1c-798267553660-goldmane-key-pair\") pod \"goldmane-666569f655-psj7t\" (UID: \"a786dc4c-0c1b-411d-9e1c-798267553660\") " pod="calico-system/goldmane-666569f655-psj7t" Nov 4 04:59:01.055971 kubelet[2748]: I1104 04:59:01.054766 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xrk\" (UniqueName: \"kubernetes.io/projected/58ac8885-f887-4233-b0b8-becfde233cd2-kube-api-access-x9xrk\") pod \"calico-apiserver-64fff7f795-2rvd7\" (UID: \"58ac8885-f887-4233-b0b8-becfde233cd2\") " pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" Nov 4 04:59:01.102229 kubelet[2748]: E1104 04:59:01.102181 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:01.108171 containerd[1595]: time="2025-11-04T04:59:01.107050244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 04:59:01.273066 kubelet[2748]: E1104 04:59:01.272180 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:01.273912 containerd[1595]: time="2025-11-04T04:59:01.273876359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6qrxf,Uid:9022ded0-c2da-40c2-8e1c-cd8e1a1c5390,Namespace:kube-system,Attempt:0,}" Nov 4 
04:59:01.292483 kubelet[2748]: E1104 04:59:01.292440 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:01.296480 containerd[1595]: time="2025-11-04T04:59:01.294367666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xsjpm,Uid:a7b4ab04-49be-48a0-9728-5a995e7ce19d,Namespace:kube-system,Attempt:0,}" Nov 4 04:59:01.299089 containerd[1595]: time="2025-11-04T04:59:01.298571460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fff7f795-2rvd7,Uid:58ac8885-f887-4233-b0b8-becfde233cd2,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:59:01.330347 containerd[1595]: time="2025-11-04T04:59:01.330307251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b455dd7c6-8tlqq,Uid:222bf072-72e8-4f95-b557-9dabd6a2bea1,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:01.332207 containerd[1595]: time="2025-11-04T04:59:01.331131485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fff7f795-8w5t9,Uid:c80a2d93-8040-43fb-ae27-fda397ce6d05,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:59:01.344587 containerd[1595]: time="2025-11-04T04:59:01.344188238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-psj7t,Uid:a786dc4c-0c1b-411d-9e1c-798267553660,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:01.357477 containerd[1595]: time="2025-11-04T04:59:01.357320903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d56b5f447-8pgbp,Uid:dfabd4a9-e6b0-4f3f-aab6-4d707177593b,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:01.682797 containerd[1595]: time="2025-11-04T04:59:01.682617652Z" level=error msg="Failed to destroy network for sandbox \"552e578119137155871802ec239e3d5d7fc4027e512c02bcb2e51f96e7f966cc\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.702873 containerd[1595]: time="2025-11-04T04:59:01.701620288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d56b5f447-8pgbp,Uid:dfabd4a9-e6b0-4f3f-aab6-4d707177593b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"552e578119137155871802ec239e3d5d7fc4027e512c02bcb2e51f96e7f966cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.703644 containerd[1595]: time="2025-11-04T04:59:01.703506036Z" level=error msg="Failed to destroy network for sandbox \"9a876254bfe2bd0f1aa427e46657caed14b8f9ace744d192c3e11ba4dee41930\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.706945 containerd[1595]: time="2025-11-04T04:59:01.706837540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b455dd7c6-8tlqq,Uid:222bf072-72e8-4f95-b557-9dabd6a2bea1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a876254bfe2bd0f1aa427e46657caed14b8f9ace744d192c3e11ba4dee41930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.719606 kubelet[2748]: E1104 04:59:01.719466 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"552e578119137155871802ec239e3d5d7fc4027e512c02bcb2e51f96e7f966cc\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.719870 kubelet[2748]: E1104 04:59:01.719613 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"552e578119137155871802ec239e3d5d7fc4027e512c02bcb2e51f96e7f966cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d56b5f447-8pgbp" Nov 4 04:59:01.719870 kubelet[2748]: E1104 04:59:01.719653 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"552e578119137155871802ec239e3d5d7fc4027e512c02bcb2e51f96e7f966cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d56b5f447-8pgbp" Nov 4 04:59:01.735825 kubelet[2748]: E1104 04:59:01.735543 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d56b5f447-8pgbp_calico-system(dfabd4a9-e6b0-4f3f-aab6-4d707177593b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d56b5f447-8pgbp_calico-system(dfabd4a9-e6b0-4f3f-aab6-4d707177593b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"552e578119137155871802ec239e3d5d7fc4027e512c02bcb2e51f96e7f966cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d56b5f447-8pgbp" podUID="dfabd4a9-e6b0-4f3f-aab6-4d707177593b" Nov 4 04:59:01.748899 kubelet[2748]: E1104 04:59:01.748106 2748 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a876254bfe2bd0f1aa427e46657caed14b8f9ace744d192c3e11ba4dee41930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.748899 kubelet[2748]: E1104 04:59:01.748204 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a876254bfe2bd0f1aa427e46657caed14b8f9ace744d192c3e11ba4dee41930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" Nov 4 04:59:01.748899 kubelet[2748]: E1104 04:59:01.748237 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a876254bfe2bd0f1aa427e46657caed14b8f9ace744d192c3e11ba4dee41930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" Nov 4 04:59:01.749417 kubelet[2748]: E1104 04:59:01.748326 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b455dd7c6-8tlqq_calico-system(222bf072-72e8-4f95-b557-9dabd6a2bea1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b455dd7c6-8tlqq_calico-system(222bf072-72e8-4f95-b557-9dabd6a2bea1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a876254bfe2bd0f1aa427e46657caed14b8f9ace744d192c3e11ba4dee41930\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" podUID="222bf072-72e8-4f95-b557-9dabd6a2bea1" Nov 4 04:59:01.761786 containerd[1595]: time="2025-11-04T04:59:01.761546362Z" level=error msg="Failed to destroy network for sandbox \"c3442f9b3b4ff06a7f15990395911bef452a6d39820392143aecf985b94ee9ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.766334 containerd[1595]: time="2025-11-04T04:59:01.765863917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6qrxf,Uid:9022ded0-c2da-40c2-8e1c-cd8e1a1c5390,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3442f9b3b4ff06a7f15990395911bef452a6d39820392143aecf985b94ee9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.768015 kubelet[2748]: E1104 04:59:01.767774 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3442f9b3b4ff06a7f15990395911bef452a6d39820392143aecf985b94ee9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.769581 kubelet[2748]: E1104 04:59:01.768434 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3442f9b3b4ff06a7f15990395911bef452a6d39820392143aecf985b94ee9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6qrxf" Nov 4 04:59:01.769581 kubelet[2748]: E1104 04:59:01.769216 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3442f9b3b4ff06a7f15990395911bef452a6d39820392143aecf985b94ee9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6qrxf" Nov 4 04:59:01.770388 kubelet[2748]: E1104 04:59:01.770002 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6qrxf_kube-system(9022ded0-c2da-40c2-8e1c-cd8e1a1c5390)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6qrxf_kube-system(9022ded0-c2da-40c2-8e1c-cd8e1a1c5390)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3442f9b3b4ff06a7f15990395911bef452a6d39820392143aecf985b94ee9ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6qrxf" podUID="9022ded0-c2da-40c2-8e1c-cd8e1a1c5390" Nov 4 04:59:01.801759 containerd[1595]: time="2025-11-04T04:59:01.801651615Z" level=error msg="Failed to destroy network for sandbox \"5d0d7862f669610dca193c8529fde9e68cd5e59312e1f9c703ae746a952a9795\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.809078 containerd[1595]: time="2025-11-04T04:59:01.809004440Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-64fff7f795-8w5t9,Uid:c80a2d93-8040-43fb-ae27-fda397ce6d05,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0d7862f669610dca193c8529fde9e68cd5e59312e1f9c703ae746a952a9795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.810789 kubelet[2748]: E1104 04:59:01.810746 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0d7862f669610dca193c8529fde9e68cd5e59312e1f9c703ae746a952a9795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.811297 kubelet[2748]: E1104 04:59:01.811002 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0d7862f669610dca193c8529fde9e68cd5e59312e1f9c703ae746a952a9795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" Nov 4 04:59:01.812439 kubelet[2748]: E1104 04:59:01.811457 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0d7862f669610dca193c8529fde9e68cd5e59312e1f9c703ae746a952a9795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" Nov 4 04:59:01.812439 kubelet[2748]: E1104 04:59:01.811535 2748 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64fff7f795-8w5t9_calico-apiserver(c80a2d93-8040-43fb-ae27-fda397ce6d05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64fff7f795-8w5t9_calico-apiserver(c80a2d93-8040-43fb-ae27-fda397ce6d05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d0d7862f669610dca193c8529fde9e68cd5e59312e1f9c703ae746a952a9795\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" podUID="c80a2d93-8040-43fb-ae27-fda397ce6d05" Nov 4 04:59:01.814882 containerd[1595]: time="2025-11-04T04:59:01.813619613Z" level=error msg="Failed to destroy network for sandbox \"25fcf34c191b36ca98daf6aec85082cd23f71f44b2475eab0b62ab139c266e3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.819987 containerd[1595]: time="2025-11-04T04:59:01.819925283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-psj7t,Uid:a786dc4c-0c1b-411d-9e1c-798267553660,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25fcf34c191b36ca98daf6aec85082cd23f71f44b2475eab0b62ab139c266e3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.820709 kubelet[2748]: E1104 04:59:01.820661 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"25fcf34c191b36ca98daf6aec85082cd23f71f44b2475eab0b62ab139c266e3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.820953 kubelet[2748]: E1104 04:59:01.820918 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25fcf34c191b36ca98daf6aec85082cd23f71f44b2475eab0b62ab139c266e3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-psj7t" Nov 4 04:59:01.821096 kubelet[2748]: E1104 04:59:01.821069 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25fcf34c191b36ca98daf6aec85082cd23f71f44b2475eab0b62ab139c266e3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-psj7t" Nov 4 04:59:01.821322 kubelet[2748]: E1104 04:59:01.821288 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-psj7t_calico-system(a786dc4c-0c1b-411d-9e1c-798267553660)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-psj7t_calico-system(a786dc4c-0c1b-411d-9e1c-798267553660)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25fcf34c191b36ca98daf6aec85082cd23f71f44b2475eab0b62ab139c266e3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-psj7t" 
podUID="a786dc4c-0c1b-411d-9e1c-798267553660" Nov 4 04:59:01.825090 containerd[1595]: time="2025-11-04T04:59:01.825020545Z" level=error msg="Failed to destroy network for sandbox \"4fdf589f41b356e37ed991f0169bd7f004ebd941db15ee546750a32e650d1cf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.831224 containerd[1595]: time="2025-11-04T04:59:01.831031869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xsjpm,Uid:a7b4ab04-49be-48a0-9728-5a995e7ce19d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fdf589f41b356e37ed991f0169bd7f004ebd941db15ee546750a32e650d1cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.832095 kubelet[2748]: E1104 04:59:01.831628 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fdf589f41b356e37ed991f0169bd7f004ebd941db15ee546750a32e650d1cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.832095 kubelet[2748]: E1104 04:59:01.831688 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fdf589f41b356e37ed991f0169bd7f004ebd941db15ee546750a32e650d1cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xsjpm" Nov 4 04:59:01.832095 kubelet[2748]: E1104 
04:59:01.831711 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fdf589f41b356e37ed991f0169bd7f004ebd941db15ee546750a32e650d1cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xsjpm" Nov 4 04:59:01.833114 kubelet[2748]: E1104 04:59:01.831771 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xsjpm_kube-system(a7b4ab04-49be-48a0-9728-5a995e7ce19d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xsjpm_kube-system(a7b4ab04-49be-48a0-9728-5a995e7ce19d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fdf589f41b356e37ed991f0169bd7f004ebd941db15ee546750a32e650d1cf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xsjpm" podUID="a7b4ab04-49be-48a0-9728-5a995e7ce19d" Nov 4 04:59:01.837831 containerd[1595]: time="2025-11-04T04:59:01.837757870Z" level=error msg="Failed to destroy network for sandbox \"6d472a9b512dbd0c57f74f5b215219d8aeb471737415b4c79d786c00a4343360\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.840293 containerd[1595]: time="2025-11-04T04:59:01.840232084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fff7f795-2rvd7,Uid:58ac8885-f887-4233-b0b8-becfde233cd2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6d472a9b512dbd0c57f74f5b215219d8aeb471737415b4c79d786c00a4343360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.840719 kubelet[2748]: E1104 04:59:01.840675 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d472a9b512dbd0c57f74f5b215219d8aeb471737415b4c79d786c00a4343360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:01.840856 kubelet[2748]: E1104 04:59:01.840749 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d472a9b512dbd0c57f74f5b215219d8aeb471737415b4c79d786c00a4343360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" Nov 4 04:59:01.840856 kubelet[2748]: E1104 04:59:01.840774 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d472a9b512dbd0c57f74f5b215219d8aeb471737415b4c79d786c00a4343360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" Nov 4 04:59:01.840856 kubelet[2748]: E1104 04:59:01.840834 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64fff7f795-2rvd7_calico-apiserver(58ac8885-f887-4233-b0b8-becfde233cd2)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-64fff7f795-2rvd7_calico-apiserver(58ac8885-f887-4233-b0b8-becfde233cd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d472a9b512dbd0c57f74f5b215219d8aeb471737415b4c79d786c00a4343360\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" podUID="58ac8885-f887-4233-b0b8-becfde233cd2" Nov 4 04:59:01.946027 systemd[1]: Created slice kubepods-besteffort-pod092f500e_4822_4935_b64c_fa41aafe316d.slice - libcontainer container kubepods-besteffort-pod092f500e_4822_4935_b64c_fa41aafe316d.slice. Nov 4 04:59:01.956452 containerd[1595]: time="2025-11-04T04:59:01.956335770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vv5n9,Uid:092f500e-4822-4935-b64c-fa41aafe316d,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:02.057271 containerd[1595]: time="2025-11-04T04:59:02.057153960Z" level=error msg="Failed to destroy network for sandbox \"d883a4678376c2013b52ff8bc887ee795ee47717d9383b50e4df4ec4dab0b5d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:02.062458 containerd[1595]: time="2025-11-04T04:59:02.062020655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vv5n9,Uid:092f500e-4822-4935-b64c-fa41aafe316d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d883a4678376c2013b52ff8bc887ee795ee47717d9383b50e4df4ec4dab0b5d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:02.063449 kubelet[2748]: E1104 
04:59:02.063363 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d883a4678376c2013b52ff8bc887ee795ee47717d9383b50e4df4ec4dab0b5d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:59:02.064979 kubelet[2748]: E1104 04:59:02.063473 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d883a4678376c2013b52ff8bc887ee795ee47717d9383b50e4df4ec4dab0b5d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vv5n9" Nov 4 04:59:02.064979 kubelet[2748]: E1104 04:59:02.063506 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d883a4678376c2013b52ff8bc887ee795ee47717d9383b50e4df4ec4dab0b5d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vv5n9" Nov 4 04:59:02.064979 kubelet[2748]: E1104 04:59:02.063566 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vv5n9_calico-system(092f500e-4822-4935-b64c-fa41aafe316d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vv5n9_calico-system(092f500e-4822-4935-b64c-fa41aafe316d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d883a4678376c2013b52ff8bc887ee795ee47717d9383b50e4df4ec4dab0b5d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d" Nov 4 04:59:02.064386 systemd[1]: run-netns-cni\x2d8ccae51a\x2d1dc3\x2d470d\x2d51fe\x2d20cca4294265.mount: Deactivated successfully. Nov 4 04:59:06.489416 kubelet[2748]: I1104 04:59:06.489218 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:59:06.499916 kubelet[2748]: E1104 04:59:06.499864 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:07.123488 kubelet[2748]: E1104 04:59:07.123434 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:08.009122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027685419.mount: Deactivated successfully. 
Nov 4 04:59:08.120521 containerd[1595]: time="2025-11-04T04:59:08.102558018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:59:08.120521 containerd[1595]: time="2025-11-04T04:59:08.120171713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 4 04:59:08.136378 containerd[1595]: time="2025-11-04T04:59:08.136283080Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:59:08.137950 containerd[1595]: time="2025-11-04T04:59:08.137868952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:59:08.139419 containerd[1595]: time="2025-11-04T04:59:08.138857339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.03168769s" Nov 4 04:59:08.139419 containerd[1595]: time="2025-11-04T04:59:08.138911240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 04:59:08.179404 containerd[1595]: time="2025-11-04T04:59:08.179337247Z" level=info msg="CreateContainer within sandbox \"7cbec958e4630568ad3fd6259ce9bfb0fb940e23a311d83c2d6a4e0fd97e7f21\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 04:59:08.240980 containerd[1595]: time="2025-11-04T04:59:08.240896468Z" level=info msg="Container 
037d5915438357460964175834c977bae7c11f9895b822785b755fa4f9d8036a: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:59:08.244969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590377527.mount: Deactivated successfully. Nov 4 04:59:08.295954 containerd[1595]: time="2025-11-04T04:59:08.295791783Z" level=info msg="CreateContainer within sandbox \"7cbec958e4630568ad3fd6259ce9bfb0fb940e23a311d83c2d6a4e0fd97e7f21\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"037d5915438357460964175834c977bae7c11f9895b822785b755fa4f9d8036a\"" Nov 4 04:59:08.297058 containerd[1595]: time="2025-11-04T04:59:08.296983639Z" level=info msg="StartContainer for \"037d5915438357460964175834c977bae7c11f9895b822785b755fa4f9d8036a\"" Nov 4 04:59:08.303713 containerd[1595]: time="2025-11-04T04:59:08.302996328Z" level=info msg="connecting to shim 037d5915438357460964175834c977bae7c11f9895b822785b755fa4f9d8036a" address="unix:///run/containerd/s/b0c023cef52d0cd886ae06a20b2dbd5d82ebe7b93fc7d7e6d40f52c044588d5f" protocol=ttrpc version=3 Nov 4 04:59:08.491793 systemd[1]: Started cri-containerd-037d5915438357460964175834c977bae7c11f9895b822785b755fa4f9d8036a.scope - libcontainer container 037d5915438357460964175834c977bae7c11f9895b822785b755fa4f9d8036a. Nov 4 04:59:08.556570 containerd[1595]: time="2025-11-04T04:59:08.556464685Z" level=info msg="StartContainer for \"037d5915438357460964175834c977bae7c11f9895b822785b755fa4f9d8036a\" returns successfully" Nov 4 04:59:08.674828 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 04:59:08.676045 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 4 04:59:09.028077 kubelet[2748]: I1104 04:59:09.028025 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-whisker-ca-bundle\") pod \"dfabd4a9-e6b0-4f3f-aab6-4d707177593b\" (UID: \"dfabd4a9-e6b0-4f3f-aab6-4d707177593b\") " Nov 4 04:59:09.028077 kubelet[2748]: I1104 04:59:09.028076 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-whisker-backend-key-pair\") pod \"dfabd4a9-e6b0-4f3f-aab6-4d707177593b\" (UID: \"dfabd4a9-e6b0-4f3f-aab6-4d707177593b\") " Nov 4 04:59:09.028077 kubelet[2748]: I1104 04:59:09.028104 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6q9p\" (UniqueName: \"kubernetes.io/projected/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-kube-api-access-p6q9p\") pod \"dfabd4a9-e6b0-4f3f-aab6-4d707177593b\" (UID: \"dfabd4a9-e6b0-4f3f-aab6-4d707177593b\") " Nov 4 04:59:09.028936 kubelet[2748]: I1104 04:59:09.028783 2748 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dfabd4a9-e6b0-4f3f-aab6-4d707177593b" (UID: "dfabd4a9-e6b0-4f3f-aab6-4d707177593b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 04:59:09.041141 kubelet[2748]: I1104 04:59:09.039698 2748 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-kube-api-access-p6q9p" (OuterVolumeSpecName: "kube-api-access-p6q9p") pod "dfabd4a9-e6b0-4f3f-aab6-4d707177593b" (UID: "dfabd4a9-e6b0-4f3f-aab6-4d707177593b"). InnerVolumeSpecName "kube-api-access-p6q9p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 04:59:09.040269 systemd[1]: var-lib-kubelet-pods-dfabd4a9\x2de6b0\x2d4f3f\x2daab6\x2d4d707177593b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp6q9p.mount: Deactivated successfully. Nov 4 04:59:09.045618 kubelet[2748]: I1104 04:59:09.045529 2748 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dfabd4a9-e6b0-4f3f-aab6-4d707177593b" (UID: "dfabd4a9-e6b0-4f3f-aab6-4d707177593b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 04:59:09.047771 systemd[1]: var-lib-kubelet-pods-dfabd4a9\x2de6b0\x2d4f3f\x2daab6\x2d4d707177593b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 4 04:59:09.128683 kubelet[2748]: I1104 04:59:09.128612 2748 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-whisker-ca-bundle\") on node \"ci-4508.0.0-n-4006da48af\" DevicePath \"\"" Nov 4 04:59:09.129061 kubelet[2748]: I1104 04:59:09.128975 2748 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-whisker-backend-key-pair\") on node \"ci-4508.0.0-n-4006da48af\" DevicePath \"\"" Nov 4 04:59:09.129061 kubelet[2748]: I1104 04:59:09.128993 2748 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p6q9p\" (UniqueName: \"kubernetes.io/projected/dfabd4a9-e6b0-4f3f-aab6-4d707177593b-kube-api-access-p6q9p\") on node \"ci-4508.0.0-n-4006da48af\" DevicePath \"\"" Nov 4 04:59:09.141427 kubelet[2748]: E1104 04:59:09.141200 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:09.146794 systemd[1]: Removed slice kubepods-besteffort-poddfabd4a9_e6b0_4f3f_aab6_4d707177593b.slice - libcontainer container kubepods-besteffort-poddfabd4a9_e6b0_4f3f_aab6_4d707177593b.slice. Nov 4 04:59:09.181298 kubelet[2748]: I1104 04:59:09.181228 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2l56r" podStartSLOduration=2.5725132 podStartE2EDuration="18.18120705s" podCreationTimestamp="2025-11-04 04:58:51 +0000 UTC" firstStartedPulling="2025-11-04 04:58:52.531831479 +0000 UTC m=+22.824818741" lastFinishedPulling="2025-11-04 04:59:08.140525315 +0000 UTC m=+38.433512591" observedRunningTime="2025-11-04 04:59:09.168713658 +0000 UTC m=+39.461700943" watchObservedRunningTime="2025-11-04 04:59:09.18120705 +0000 UTC m=+39.474194378" Nov 4 04:59:09.266638 systemd[1]: Created slice kubepods-besteffort-podd8570cf7_4cce_4759_8eb4_4f57fafd9490.slice - libcontainer container kubepods-besteffort-podd8570cf7_4cce_4759_8eb4_4f57fafd9490.slice. 
Nov 4 04:59:09.329915 kubelet[2748]: I1104 04:59:09.329772 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8570cf7-4cce-4759-8eb4-4f57fafd9490-whisker-ca-bundle\") pod \"whisker-5f8f9884d7-2qd2d\" (UID: \"d8570cf7-4cce-4759-8eb4-4f57fafd9490\") " pod="calico-system/whisker-5f8f9884d7-2qd2d" Nov 4 04:59:09.329915 kubelet[2748]: I1104 04:59:09.329840 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs49m\" (UniqueName: \"kubernetes.io/projected/d8570cf7-4cce-4759-8eb4-4f57fafd9490-kube-api-access-cs49m\") pod \"whisker-5f8f9884d7-2qd2d\" (UID: \"d8570cf7-4cce-4759-8eb4-4f57fafd9490\") " pod="calico-system/whisker-5f8f9884d7-2qd2d" Nov 4 04:59:09.329915 kubelet[2748]: I1104 04:59:09.329863 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8570cf7-4cce-4759-8eb4-4f57fafd9490-whisker-backend-key-pair\") pod \"whisker-5f8f9884d7-2qd2d\" (UID: \"d8570cf7-4cce-4759-8eb4-4f57fafd9490\") " pod="calico-system/whisker-5f8f9884d7-2qd2d" Nov 4 04:59:09.573135 containerd[1595]: time="2025-11-04T04:59:09.573021092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f8f9884d7-2qd2d,Uid:d8570cf7-4cce-4759-8eb4-4f57fafd9490,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:09.915176 kubelet[2748]: I1104 04:59:09.915093 2748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfabd4a9-e6b0-4f3f-aab6-4d707177593b" path="/var/lib/kubelet/pods/dfabd4a9-e6b0-4f3f-aab6-4d707177593b/volumes" Nov 4 04:59:09.936513 systemd-networkd[1488]: cali441bb0324ff: Link UP Nov 4 04:59:09.942567 systemd-networkd[1488]: cali441bb0324ff: Gained carrier Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.603 [INFO][3855] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.634 [INFO][3855] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0 whisker-5f8f9884d7- calico-system d8570cf7-4cce-4759-8eb4-4f57fafd9490 909 0 2025-11-04 04:59:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f8f9884d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4508.0.0-n-4006da48af whisker-5f8f9884d7-2qd2d eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali441bb0324ff [] [] }} ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" Namespace="calico-system" Pod="whisker-5f8f9884d7-2qd2d" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.634 [INFO][3855] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" Namespace="calico-system" Pod="whisker-5f8f9884d7-2qd2d" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.819 [INFO][3866] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" HandleID="k8s-pod-network.476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" Workload="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.821 [INFO][3866] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" HandleID="k8s-pod-network.476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" 
Workload="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380420), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4508.0.0-n-4006da48af", "pod":"whisker-5f8f9884d7-2qd2d", "timestamp":"2025-11-04 04:59:09.819304041 +0000 UTC"}, Hostname:"ci-4508.0.0-n-4006da48af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.821 [INFO][3866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.826 [INFO][3866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.826 [INFO][3866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4508.0.0-n-4006da48af' Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.844 [INFO][3866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.862 [INFO][3866] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.869 [INFO][3866] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.872 [INFO][3866] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.875 [INFO][3866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:09.984694 
containerd[1595]: 2025-11-04 04:59:09.875 [INFO][3866] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.877 [INFO][3866] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832 Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.882 [INFO][3866] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.900 [INFO][3866] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.65/26] block=192.168.121.64/26 handle="k8s-pod-network.476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.901 [INFO][3866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.65/26] handle="k8s-pod-network.476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.901 [INFO][3866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:59:09.984694 containerd[1595]: 2025-11-04 04:59:09.901 [INFO][3866] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.65/26] IPv6=[] ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" HandleID="k8s-pod-network.476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" Workload="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0" Nov 4 04:59:09.987983 containerd[1595]: 2025-11-04 04:59:09.911 [INFO][3855] cni-plugin/k8s.go 418: Populated endpoint ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" Namespace="calico-system" Pod="whisker-5f8f9884d7-2qd2d" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0", GenerateName:"whisker-5f8f9884d7-", Namespace:"calico-system", SelfLink:"", UID:"d8570cf7-4cce-4759-8eb4-4f57fafd9490", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 59, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f8f9884d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"", Pod:"whisker-5f8f9884d7-2qd2d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali441bb0324ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:09.987983 containerd[1595]: 2025-11-04 04:59:09.911 [INFO][3855] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.65/32] ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" Namespace="calico-system" Pod="whisker-5f8f9884d7-2qd2d" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0" Nov 4 04:59:09.987983 containerd[1595]: 2025-11-04 04:59:09.911 [INFO][3855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali441bb0324ff ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" Namespace="calico-system" Pod="whisker-5f8f9884d7-2qd2d" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0" Nov 4 04:59:09.987983 containerd[1595]: 2025-11-04 04:59:09.939 [INFO][3855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" Namespace="calico-system" Pod="whisker-5f8f9884d7-2qd2d" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0" Nov 4 04:59:09.987983 containerd[1595]: 2025-11-04 04:59:09.940 [INFO][3855] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" Namespace="calico-system" Pod="whisker-5f8f9884d7-2qd2d" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0", GenerateName:"whisker-5f8f9884d7-", Namespace:"calico-system", SelfLink:"", UID:"d8570cf7-4cce-4759-8eb4-4f57fafd9490", 
ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 59, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f8f9884d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832", Pod:"whisker-5f8f9884d7-2qd2d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali441bb0324ff", MAC:"5e:4a:1f:3c:35:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:09.987983 containerd[1595]: 2025-11-04 04:59:09.979 [INFO][3855] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" Namespace="calico-system" Pod="whisker-5f8f9884d7-2qd2d" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-whisker--5f8f9884d7--2qd2d-eth0" Nov 4 04:59:10.140916 kubelet[2748]: I1104 04:59:10.140825 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:59:10.141968 kubelet[2748]: E1104 04:59:10.141911 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:10.350211 containerd[1595]: time="2025-11-04T04:59:10.349639198Z" 
level=info msg="connecting to shim 476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832" address="unix:///run/containerd/s/b9533d92ea4c19da9447247ee30307339a9134653c3bc510f2268074c5293abb" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:10.401760 systemd[1]: Started cri-containerd-476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832.scope - libcontainer container 476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832. Nov 4 04:59:10.541523 containerd[1595]: time="2025-11-04T04:59:10.541262750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f8f9884d7-2qd2d,Uid:d8570cf7-4cce-4759-8eb4-4f57fafd9490,Namespace:calico-system,Attempt:0,} returns sandbox id \"476bad4150edfd761ce0876e64b9cae6c76e7a7306e8491e9a66f3f911de1832\"" Nov 4 04:59:10.551625 containerd[1595]: time="2025-11-04T04:59:10.551381299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:59:10.883645 containerd[1595]: time="2025-11-04T04:59:10.883577165Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:10.890824 containerd[1595]: time="2025-11-04T04:59:10.884752100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:59:10.891499 containerd[1595]: time="2025-11-04T04:59:10.884812687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:10.891569 kubelet[2748]: E1104 04:59:10.891092 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:10.891569 
kubelet[2748]: E1104 04:59:10.891151 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:10.891672 kubelet[2748]: E1104 04:59:10.891354 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c2e3b148ce3f482bae29904bdedc5907,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cs49m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-5f8f9884d7-2qd2d_calico-system(d8570cf7-4cce-4759-8eb4-4f57fafd9490): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:10.895619 containerd[1595]: time="2025-11-04T04:59:10.895062372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:59:11.166861 systemd-networkd[1488]: cali441bb0324ff: Gained IPv6LL Nov 4 04:59:11.254309 containerd[1595]: time="2025-11-04T04:59:11.254229973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:11.255066 containerd[1595]: time="2025-11-04T04:59:11.255024269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:59:11.255202 containerd[1595]: time="2025-11-04T04:59:11.255170113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:11.255436 kubelet[2748]: E1104 04:59:11.255381 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:11.256540 kubelet[2748]: E1104 04:59:11.255581 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" 
Nov 4 04:59:11.256588 kubelet[2748]: E1104 04:59:11.255715 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs49m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-5f8f9884d7-2qd2d_calico-system(d8570cf7-4cce-4759-8eb4-4f57fafd9490): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:11.257410 kubelet[2748]: E1104 04:59:11.257160 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f9884d7-2qd2d" podUID="d8570cf7-4cce-4759-8eb4-4f57fafd9490" Nov 4 04:59:11.315880 systemd-networkd[1488]: vxlan.calico: Link UP Nov 4 04:59:11.315889 systemd-networkd[1488]: vxlan.calico: Gained carrier Nov 4 04:59:11.899414 kubelet[2748]: E1104 04:59:11.898801 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:11.900009 containerd[1595]: time="2025-11-04T04:59:11.899944535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6qrxf,Uid:9022ded0-c2da-40c2-8e1c-cd8e1a1c5390,Namespace:kube-system,Attempt:0,}" Nov 4 04:59:12.059293 systemd-networkd[1488]: calia64c9661f2e: Link UP Nov 4 04:59:12.060158 systemd-networkd[1488]: calia64c9661f2e: Gained carrier Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:11.955 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0 coredns-668d6bf9bc- kube-system 9022ded0-c2da-40c2-8e1c-cd8e1a1c5390 818 0 2025-11-04 04:58:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4508.0.0-n-4006da48af coredns-668d6bf9bc-6qrxf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia64c9661f2e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qrxf" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:11.955 [INFO][4119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qrxf" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:11.999 [INFO][4131] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" HandleID="k8s-pod-network.9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Workload="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:11.999 [INFO][4131] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" HandleID="k8s-pod-network.9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Workload="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4508.0.0-n-4006da48af", "pod":"coredns-668d6bf9bc-6qrxf", "timestamp":"2025-11-04 04:59:11.999065087 +0000 UTC"}, Hostname:"ci-4508.0.0-n-4006da48af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:11.999 [INFO][4131] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:11.999 [INFO][4131] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:11.999 [INFO][4131] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4508.0.0-n-4006da48af' Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.010 [INFO][4131] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.018 [INFO][4131] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.025 [INFO][4131] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.028 [INFO][4131] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.032 [INFO][4131] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.032 [INFO][4131] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 
handle="k8s-pod-network.9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.035 [INFO][4131] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215 Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.041 [INFO][4131] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.049 [INFO][4131] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.66/26] block=192.168.121.64/26 handle="k8s-pod-network.9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.049 [INFO][4131] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.66/26] handle="k8s-pod-network.9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.049 [INFO][4131] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:59:12.081916 containerd[1595]: 2025-11-04 04:59:12.049 [INFO][4131] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.66/26] IPv6=[] ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" HandleID="k8s-pod-network.9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Workload="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0" Nov 4 04:59:12.082682 containerd[1595]: 2025-11-04 04:59:12.054 [INFO][4119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qrxf" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9022ded0-c2da-40c2-8e1c-cd8e1a1c5390", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"", Pod:"coredns-668d6bf9bc-6qrxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calia64c9661f2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:12.082682 containerd[1595]: 2025-11-04 04:59:12.054 [INFO][4119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.66/32] ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qrxf" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0" Nov 4 04:59:12.082682 containerd[1595]: 2025-11-04 04:59:12.054 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia64c9661f2e ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qrxf" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0" Nov 4 04:59:12.082682 containerd[1595]: 2025-11-04 04:59:12.060 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qrxf" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0" Nov 4 04:59:12.082682 containerd[1595]: 2025-11-04 04:59:12.061 [INFO][4119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qrxf" 
WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9022ded0-c2da-40c2-8e1c-cd8e1a1c5390", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215", Pod:"coredns-668d6bf9bc-6qrxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia64c9661f2e", MAC:"fe:f7:ab:d8:ef:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:12.086921 
containerd[1595]: 2025-11-04 04:59:12.075 [INFO][4119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qrxf" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--6qrxf-eth0" Nov 4 04:59:12.114566 containerd[1595]: time="2025-11-04T04:59:12.114473060Z" level=info msg="connecting to shim 9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215" address="unix:///run/containerd/s/12b9c11625fa486952f86a4f8eedc700375d52e60e4c78475be5f027c86f56d6" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:12.150711 systemd[1]: Started cri-containerd-9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215.scope - libcontainer container 9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215. Nov 4 04:59:12.156319 kubelet[2748]: E1104 04:59:12.156244 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f9884d7-2qd2d" podUID="d8570cf7-4cce-4759-8eb4-4f57fafd9490" Nov 4 04:59:12.235336 containerd[1595]: time="2025-11-04T04:59:12.235285489Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-6qrxf,Uid:9022ded0-c2da-40c2-8e1c-cd8e1a1c5390,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215\"" Nov 4 04:59:12.236296 kubelet[2748]: E1104 04:59:12.236261 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:12.242702 containerd[1595]: time="2025-11-04T04:59:12.242524497Z" level=info msg="CreateContainer within sandbox \"9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 04:59:12.263348 containerd[1595]: time="2025-11-04T04:59:12.261518221Z" level=info msg="Container a749a587ea43d01fedd8df061ea217cff4f639975e98ebaacd66317b2f6d5d6b: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:59:12.262203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062385137.mount: Deactivated successfully. 
Nov 4 04:59:12.277517 containerd[1595]: time="2025-11-04T04:59:12.277450141Z" level=info msg="CreateContainer within sandbox \"9dca2dd5a3bc8706280f997b31f3aa230f19ea7daaa60537044725855b76c215\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a749a587ea43d01fedd8df061ea217cff4f639975e98ebaacd66317b2f6d5d6b\"" Nov 4 04:59:12.279034 containerd[1595]: time="2025-11-04T04:59:12.278966184Z" level=info msg="StartContainer for \"a749a587ea43d01fedd8df061ea217cff4f639975e98ebaacd66317b2f6d5d6b\"" Nov 4 04:59:12.280821 containerd[1595]: time="2025-11-04T04:59:12.280774357Z" level=info msg="connecting to shim a749a587ea43d01fedd8df061ea217cff4f639975e98ebaacd66317b2f6d5d6b" address="unix:///run/containerd/s/12b9c11625fa486952f86a4f8eedc700375d52e60e4c78475be5f027c86f56d6" protocol=ttrpc version=3 Nov 4 04:59:12.306667 systemd[1]: Started cri-containerd-a749a587ea43d01fedd8df061ea217cff4f639975e98ebaacd66317b2f6d5d6b.scope - libcontainer container a749a587ea43d01fedd8df061ea217cff4f639975e98ebaacd66317b2f6d5d6b. Nov 4 04:59:12.355224 containerd[1595]: time="2025-11-04T04:59:12.355178723Z" level=info msg="StartContainer for \"a749a587ea43d01fedd8df061ea217cff4f639975e98ebaacd66317b2f6d5d6b\" returns successfully" Nov 4 04:59:12.375136 systemd-networkd[1488]: vxlan.calico: Gained IPv6LL Nov 4 04:59:12.900086 containerd[1595]: time="2025-11-04T04:59:12.900017449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fff7f795-8w5t9,Uid:c80a2d93-8040-43fb-ae27-fda397ce6d05,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:59:12.912379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1093308874.mount: Deactivated successfully. 
Nov 4 04:59:13.042595 systemd-networkd[1488]: cali1b0fffb75bb: Link UP Nov 4 04:59:13.042811 systemd-networkd[1488]: cali1b0fffb75bb: Gained carrier Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:12.952 [INFO][4229] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0 calico-apiserver-64fff7f795- calico-apiserver c80a2d93-8040-43fb-ae27-fda397ce6d05 821 0 2025-11-04 04:58:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64fff7f795 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4508.0.0-n-4006da48af calico-apiserver-64fff7f795-8w5t9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1b0fffb75bb [] [] }} ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-8w5t9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:12.953 [INFO][4229] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-8w5t9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:12.990 [INFO][4240] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" HandleID="k8s-pod-network.70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Workload="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0" Nov 4 04:59:13.071149 
containerd[1595]: 2025-11-04 04:59:12.990 [INFO][4240] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" HandleID="k8s-pod-network.70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Workload="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4508.0.0-n-4006da48af", "pod":"calico-apiserver-64fff7f795-8w5t9", "timestamp":"2025-11-04 04:59:12.990072157 +0000 UTC"}, Hostname:"ci-4508.0.0-n-4006da48af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:12.990 [INFO][4240] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:12.990 [INFO][4240] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:12.990 [INFO][4240] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4508.0.0-n-4006da48af' Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:12.998 [INFO][4240] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.005 [INFO][4240] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.012 [INFO][4240] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.016 [INFO][4240] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.019 [INFO][4240] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.019 [INFO][4240] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.021 [INFO][4240] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3 Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.028 [INFO][4240] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.035 [INFO][4240] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.121.67/26] block=192.168.121.64/26 handle="k8s-pod-network.70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.035 [INFO][4240] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.67/26] handle="k8s-pod-network.70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.035 [INFO][4240] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:13.071149 containerd[1595]: 2025-11-04 04:59:13.035 [INFO][4240] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.67/26] IPv6=[] ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" HandleID="k8s-pod-network.70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Workload="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0" Nov 4 04:59:13.072819 containerd[1595]: 2025-11-04 04:59:13.038 [INFO][4229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-8w5t9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0", GenerateName:"calico-apiserver-64fff7f795-", Namespace:"calico-apiserver", SelfLink:"", UID:"c80a2d93-8040-43fb-ae27-fda397ce6d05", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"64fff7f795", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"", Pod:"calico-apiserver-64fff7f795-8w5t9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b0fffb75bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:13.072819 containerd[1595]: 2025-11-04 04:59:13.038 [INFO][4229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.67/32] ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-8w5t9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0" Nov 4 04:59:13.072819 containerd[1595]: 2025-11-04 04:59:13.038 [INFO][4229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b0fffb75bb ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-8w5t9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0" Nov 4 04:59:13.072819 containerd[1595]: 2025-11-04 04:59:13.044 [INFO][4229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-8w5t9" 
WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0" Nov 4 04:59:13.072819 containerd[1595]: 2025-11-04 04:59:13.047 [INFO][4229] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-8w5t9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0", GenerateName:"calico-apiserver-64fff7f795-", Namespace:"calico-apiserver", SelfLink:"", UID:"c80a2d93-8040-43fb-ae27-fda397ce6d05", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fff7f795", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3", Pod:"calico-apiserver-64fff7f795-8w5t9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b0fffb75bb", MAC:"a6:82:e0:b7:7c:2f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:13.072819 containerd[1595]: 2025-11-04 04:59:13.067 [INFO][4229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-8w5t9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--8w5t9-eth0" Nov 4 04:59:13.107826 containerd[1595]: time="2025-11-04T04:59:13.106729806Z" level=info msg="connecting to shim 70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3" address="unix:///run/containerd/s/6aa7d5e25917fa57f72973a51be23c915b24c81568852af78811a1886f4ba7ba" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:13.168874 kubelet[2748]: E1104 04:59:13.168728 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:13.174521 systemd[1]: Started cri-containerd-70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3.scope - libcontainer container 70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3. 
Nov 4 04:59:13.213069 kubelet[2748]: I1104 04:59:13.212750 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6qrxf" podStartSLOduration=39.212728426 podStartE2EDuration="39.212728426s" podCreationTimestamp="2025-11-04 04:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:59:13.194437942 +0000 UTC m=+43.487425226" watchObservedRunningTime="2025-11-04 04:59:13.212728426 +0000 UTC m=+43.505715707" Nov 4 04:59:13.312983 containerd[1595]: time="2025-11-04T04:59:13.312924409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fff7f795-8w5t9,Uid:c80a2d93-8040-43fb-ae27-fda397ce6d05,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"70f1d9dd489e6c41214e84847e782b55958a87ab6e0ddd69f9fdc3f276eec7b3\"" Nov 4 04:59:13.318285 containerd[1595]: time="2025-11-04T04:59:13.318137152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:13.654613 systemd-networkd[1488]: calia64c9661f2e: Gained IPv6LL Nov 4 04:59:13.683125 containerd[1595]: time="2025-11-04T04:59:13.683040505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:13.684014 containerd[1595]: time="2025-11-04T04:59:13.683962717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:13.684188 containerd[1595]: time="2025-11-04T04:59:13.684066324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:13.684283 kubelet[2748]: E1104 04:59:13.684248 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:13.684640 kubelet[2748]: E1104 04:59:13.684298 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:13.684640 kubelet[2748]: E1104 04:59:13.684449 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24rgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64fff7f795-8w5t9_calico-apiserver(c80a2d93-8040-43fb-ae27-fda397ce6d05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:13.685902 kubelet[2748]: E1104 04:59:13.685857 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" podUID="c80a2d93-8040-43fb-ae27-fda397ce6d05" Nov 4 04:59:13.898872 containerd[1595]: time="2025-11-04T04:59:13.898799909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vv5n9,Uid:092f500e-4822-4935-b64c-fa41aafe316d,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:14.041641 systemd-networkd[1488]: cali92783c4d3c6: Link UP Nov 4 
04:59:14.043258 systemd-networkd[1488]: cali92783c4d3c6: Gained carrier Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:13.946 [INFO][4308] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0 csi-node-driver- calico-system 092f500e-4822-4935-b64c-fa41aafe316d 709 0 2025-11-04 04:58:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4508.0.0-n-4006da48af csi-node-driver-vv5n9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali92783c4d3c6 [] [] }} ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Namespace="calico-system" Pod="csi-node-driver-vv5n9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:13.946 [INFO][4308] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Namespace="calico-system" Pod="csi-node-driver-vv5n9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:13.979 [INFO][4319] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" HandleID="k8s-pod-network.3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Workload="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:13.979 [INFO][4319] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" HandleID="k8s-pod-network.3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Workload="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4508.0.0-n-4006da48af", "pod":"csi-node-driver-vv5n9", "timestamp":"2025-11-04 04:59:13.979407148 +0000 UTC"}, Hostname:"ci-4508.0.0-n-4006da48af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:13.979 [INFO][4319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:13.979 [INFO][4319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:13.979 [INFO][4319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4508.0.0-n-4006da48af' Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:13.992 [INFO][4319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:13.998 [INFO][4319] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:14.005 [INFO][4319] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:14.008 [INFO][4319] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:14.012 [INFO][4319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:14.012 [INFO][4319] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:14.015 [INFO][4319] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3 Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:14.023 [INFO][4319] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:14.031 [INFO][4319] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.121.68/26] block=192.168.121.64/26 handle="k8s-pod-network.3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:14.031 [INFO][4319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.68/26] handle="k8s-pod-network.3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:14.031 [INFO][4319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:14.067586 containerd[1595]: 2025-11-04 04:59:14.031 [INFO][4319] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.68/26] IPv6=[] ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" HandleID="k8s-pod-network.3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Workload="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0" Nov 4 04:59:14.069027 containerd[1595]: 2025-11-04 04:59:14.036 [INFO][4308] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Namespace="calico-system" Pod="csi-node-driver-vv5n9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"092f500e-4822-4935-b64c-fa41aafe316d", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"", Pod:"csi-node-driver-vv5n9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92783c4d3c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:14.069027 containerd[1595]: 2025-11-04 04:59:14.036 [INFO][4308] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.68/32] ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Namespace="calico-system" Pod="csi-node-driver-vv5n9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0" Nov 4 04:59:14.069027 containerd[1595]: 2025-11-04 04:59:14.036 [INFO][4308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92783c4d3c6 ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Namespace="calico-system" Pod="csi-node-driver-vv5n9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0" Nov 4 04:59:14.069027 containerd[1595]: 2025-11-04 04:59:14.043 [INFO][4308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Namespace="calico-system" Pod="csi-node-driver-vv5n9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0" Nov 4 04:59:14.069027 containerd[1595]: 2025-11-04 04:59:14.044 
[INFO][4308] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Namespace="calico-system" Pod="csi-node-driver-vv5n9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"092f500e-4822-4935-b64c-fa41aafe316d", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3", Pod:"csi-node-driver-vv5n9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92783c4d3c6", MAC:"9e:30:22:93:c6:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:14.069027 containerd[1595]: 2025-11-04 04:59:14.056 [INFO][4308] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" Namespace="calico-system" Pod="csi-node-driver-vv5n9" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-csi--node--driver--vv5n9-eth0" Nov 4 04:59:14.110624 containerd[1595]: time="2025-11-04T04:59:14.110483358Z" level=info msg="connecting to shim 3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3" address="unix:///run/containerd/s/dd1b8d5924f74d73ecc1b27023a9c12a75c61519bcf80a5d0a46e02e1cc559a7" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:14.155802 systemd[1]: Started cri-containerd-3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3.scope - libcontainer container 3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3. Nov 4 04:59:14.166599 systemd-networkd[1488]: cali1b0fffb75bb: Gained IPv6LL Nov 4 04:59:14.176878 kubelet[2748]: E1104 04:59:14.176827 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:14.191992 kubelet[2748]: E1104 04:59:14.191614 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" podUID="c80a2d93-8040-43fb-ae27-fda397ce6d05" Nov 4 04:59:14.248573 containerd[1595]: time="2025-11-04T04:59:14.248486886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vv5n9,Uid:092f500e-4822-4935-b64c-fa41aafe316d,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"3e04d6954e551947fbbe559ee6ceae9f18d0bb33cab52f2444226fe061b905e3\"" Nov 4 04:59:14.254541 containerd[1595]: time="2025-11-04T04:59:14.254473280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:59:14.551928 containerd[1595]: time="2025-11-04T04:59:14.551862083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:14.553411 containerd[1595]: time="2025-11-04T04:59:14.553325169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:59:14.553693 containerd[1595]: time="2025-11-04T04:59:14.553644785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:14.553956 kubelet[2748]: E1104 04:59:14.553906 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:59:14.554057 kubelet[2748]: E1104 04:59:14.553972 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:59:14.555721 kubelet[2748]: E1104 04:59:14.554133 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptmjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vv5n9_calico-system(092f500e-4822-4935-b64c-fa41aafe316d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 4 04:59:14.558204 containerd[1595]: time="2025-11-04T04:59:14.558143816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:59:14.875313 containerd[1595]: time="2025-11-04T04:59:14.875131950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:14.876030 containerd[1595]: time="2025-11-04T04:59:14.875981174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:59:14.876347 containerd[1595]: time="2025-11-04T04:59:14.876089959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:14.876528 kubelet[2748]: E1104 04:59:14.876455 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:59:14.876807 kubelet[2748]: E1104 04:59:14.876737 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:59:14.877123 kubelet[2748]: E1104 04:59:14.877073 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptmjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vv5n9_calico-system(092f500e-4822-4935-b64c-fa41aafe316d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:14.878532 kubelet[2748]: E1104 04:59:14.878484 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d" Nov 4 04:59:14.899359 containerd[1595]: time="2025-11-04T04:59:14.899222334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fff7f795-2rvd7,Uid:58ac8885-f887-4233-b0b8-becfde233cd2,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:59:15.044377 systemd-networkd[1488]: calie8dc3cf037a: Link UP Nov 4 04:59:15.046179 systemd-networkd[1488]: calie8dc3cf037a: Gained carrier Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:14.948 [INFO][4382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0 calico-apiserver-64fff7f795- calico-apiserver 58ac8885-f887-4233-b0b8-becfde233cd2 819 0 2025-11-04 04:58:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64fff7f795 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4508.0.0-n-4006da48af calico-apiserver-64fff7f795-2rvd7 eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie8dc3cf037a [] [] }} ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-2rvd7" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:14.948 [INFO][4382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-2rvd7" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:14.986 [INFO][4394] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" HandleID="k8s-pod-network.f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Workload="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:14.987 [INFO][4394] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" HandleID="k8s-pod-network.f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Workload="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4508.0.0-n-4006da48af", "pod":"calico-apiserver-64fff7f795-2rvd7", "timestamp":"2025-11-04 04:59:14.986854389 +0000 UTC"}, Hostname:"ci-4508.0.0-n-4006da48af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 
04:59:15.069120 containerd[1595]: 2025-11-04 04:59:14.987 [INFO][4394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:14.987 [INFO][4394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:14.987 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4508.0.0-n-4006da48af' Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:14.995 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.005 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.013 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.016 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.020 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.020 [INFO][4394] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.022 [INFO][4394] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.027 [INFO][4394] ipam/ipam.go 1246: Writing block in order to 
claim IPs block=192.168.121.64/26 handle="k8s-pod-network.f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.035 [INFO][4394] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.69/26] block=192.168.121.64/26 handle="k8s-pod-network.f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.035 [INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.69/26] handle="k8s-pod-network.f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.035 [INFO][4394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:15.069120 containerd[1595]: 2025-11-04 04:59:15.035 [INFO][4394] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.69/26] IPv6=[] ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" HandleID="k8s-pod-network.f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Workload="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0" Nov 4 04:59:15.072232 containerd[1595]: 2025-11-04 04:59:15.039 [INFO][4382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-2rvd7" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0", GenerateName:"calico-apiserver-64fff7f795-", Namespace:"calico-apiserver", SelfLink:"", UID:"58ac8885-f887-4233-b0b8-becfde233cd2", 
ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fff7f795", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"", Pod:"calico-apiserver-64fff7f795-2rvd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8dc3cf037a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:15.072232 containerd[1595]: 2025-11-04 04:59:15.039 [INFO][4382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.69/32] ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-2rvd7" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0" Nov 4 04:59:15.072232 containerd[1595]: 2025-11-04 04:59:15.039 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8dc3cf037a ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-2rvd7" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0" Nov 4 04:59:15.072232 
containerd[1595]: 2025-11-04 04:59:15.047 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-2rvd7" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0" Nov 4 04:59:15.072232 containerd[1595]: 2025-11-04 04:59:15.047 [INFO][4382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-2rvd7" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0", GenerateName:"calico-apiserver-64fff7f795-", Namespace:"calico-apiserver", SelfLink:"", UID:"58ac8885-f887-4233-b0b8-becfde233cd2", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fff7f795", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b", Pod:"calico-apiserver-64fff7f795-2rvd7", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8dc3cf037a", MAC:"b6:22:3b:27:f4:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:15.072232 containerd[1595]: 2025-11-04 04:59:15.064 [INFO][4382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" Namespace="calico-apiserver" Pod="calico-apiserver-64fff7f795-2rvd7" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--apiserver--64fff7f795--2rvd7-eth0" Nov 4 04:59:15.103698 containerd[1595]: time="2025-11-04T04:59:15.103580300Z" level=info msg="connecting to shim f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b" address="unix:///run/containerd/s/1cef023c5a250f79225b088f2abe6a704149ea7d6dffe85b244ea7dd9a91cb11" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:15.145719 systemd[1]: Started cri-containerd-f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b.scope - libcontainer container f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b. 
Nov 4 04:59:15.187418 kubelet[2748]: E1104 04:59:15.185507 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" podUID="c80a2d93-8040-43fb-ae27-fda397ce6d05" Nov 4 04:59:15.187947 kubelet[2748]: E1104 04:59:15.187754 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:15.188665 kubelet[2748]: E1104 04:59:15.188548 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d" Nov 4 04:59:15.252872 containerd[1595]: time="2025-11-04T04:59:15.252829789Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-64fff7f795-2rvd7,Uid:58ac8885-f887-4233-b0b8-becfde233cd2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f2c1ee8b905940c59fcff01c131ef879f874df1c1aa2c5993bcc3e005496242b\"" Nov 4 04:59:15.255967 containerd[1595]: time="2025-11-04T04:59:15.255906367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:15.581100 containerd[1595]: time="2025-11-04T04:59:15.580971738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:15.582301 containerd[1595]: time="2025-11-04T04:59:15.582204587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:15.582301 containerd[1595]: time="2025-11-04T04:59:15.582267479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:15.582877 kubelet[2748]: E1104 04:59:15.582799 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:15.582952 kubelet[2748]: E1104 04:59:15.582896 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:15.583232 kubelet[2748]: E1104 04:59:15.583148 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9xrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64fff7f795-2rvd7_calico-apiserver(58ac8885-f887-4233-b0b8-becfde233cd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:15.584635 kubelet[2748]: E1104 04:59:15.584568 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" podUID="58ac8885-f887-4233-b0b8-becfde233cd2" Nov 4 04:59:15.830961 systemd-networkd[1488]: cali92783c4d3c6: Gained IPv6LL Nov 4 04:59:15.899708 containerd[1595]: time="2025-11-04T04:59:15.899480491Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-b455dd7c6-8tlqq,Uid:222bf072-72e8-4f95-b557-9dabd6a2bea1,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:15.899708 containerd[1595]: time="2025-11-04T04:59:15.899489021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-psj7t,Uid:a786dc4c-0c1b-411d-9e1c-798267553660,Namespace:calico-system,Attempt:0,}" Nov 4 04:59:16.108075 systemd-networkd[1488]: calidee31737a7e: Link UP Nov 4 04:59:16.111573 systemd-networkd[1488]: calidee31737a7e: Gained carrier Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:15.985 [INFO][4467] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0 goldmane-666569f655- calico-system a786dc4c-0c1b-411d-9e1c-798267553660 823 0 2025-11-04 04:58:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4508.0.0-n-4006da48af goldmane-666569f655-psj7t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidee31737a7e [] [] }} ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Namespace="calico-system" Pod="goldmane-666569f655-psj7t" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:15.985 [INFO][4467] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Namespace="calico-system" Pod="goldmane-666569f655-psj7t" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.039 [INFO][4484] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" HandleID="k8s-pod-network.ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Workload="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.039 [INFO][4484] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" HandleID="k8s-pod-network.ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Workload="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4508.0.0-n-4006da48af", "pod":"goldmane-666569f655-psj7t", "timestamp":"2025-11-04 04:59:16.039036509 +0000 UTC"}, Hostname:"ci-4508.0.0-n-4006da48af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.039 [INFO][4484] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.039 [INFO][4484] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.039 [INFO][4484] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4508.0.0-n-4006da48af' Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.050 [INFO][4484] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.060 [INFO][4484] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.068 [INFO][4484] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.071 [INFO][4484] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.076 [INFO][4484] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.076 [INFO][4484] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.080 [INFO][4484] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.086 [INFO][4484] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.096 [INFO][4484] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.121.70/26] block=192.168.121.64/26 handle="k8s-pod-network.ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.096 [INFO][4484] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.70/26] handle="k8s-pod-network.ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.096 [INFO][4484] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:16.135811 containerd[1595]: 2025-11-04 04:59:16.096 [INFO][4484] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.70/26] IPv6=[] ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" HandleID="k8s-pod-network.ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Workload="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0" Nov 4 04:59:16.138723 containerd[1595]: 2025-11-04 04:59:16.100 [INFO][4467] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Namespace="calico-system" Pod="goldmane-666569f655-psj7t" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a786dc4c-0c1b-411d-9e1c-798267553660", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"", Pod:"goldmane-666569f655-psj7t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidee31737a7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:16.138723 containerd[1595]: 2025-11-04 04:59:16.101 [INFO][4467] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.70/32] ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Namespace="calico-system" Pod="goldmane-666569f655-psj7t" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0" Nov 4 04:59:16.138723 containerd[1595]: 2025-11-04 04:59:16.101 [INFO][4467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidee31737a7e ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Namespace="calico-system" Pod="goldmane-666569f655-psj7t" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0" Nov 4 04:59:16.138723 containerd[1595]: 2025-11-04 04:59:16.110 [INFO][4467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Namespace="calico-system" Pod="goldmane-666569f655-psj7t" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0" Nov 4 04:59:16.138723 containerd[1595]: 2025-11-04 04:59:16.114 [INFO][4467] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Namespace="calico-system" Pod="goldmane-666569f655-psj7t" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a786dc4c-0c1b-411d-9e1c-798267553660", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de", Pod:"goldmane-666569f655-psj7t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidee31737a7e", MAC:"6e:58:d3:67:ff:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:16.138723 containerd[1595]: 2025-11-04 04:59:16.131 [INFO][4467] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" Namespace="calico-system" Pod="goldmane-666569f655-psj7t" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-goldmane--666569f655--psj7t-eth0" Nov 4 04:59:16.185948 containerd[1595]: time="2025-11-04T04:59:16.185846629Z" level=info msg="connecting to shim ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de" address="unix:///run/containerd/s/c7783659fa69b9b458460490ddb77f589b59b4fb80f1427bcb8f4e20b9d270ba" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:16.198431 kubelet[2748]: E1104 04:59:16.197749 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" podUID="58ac8885-f887-4233-b0b8-becfde233cd2" Nov 4 04:59:16.199750 kubelet[2748]: E1104 04:59:16.198959 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d" Nov 4 04:59:16.246887 systemd[1]: Started cri-containerd-ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de.scope - libcontainer container ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de. Nov 4 04:59:16.275129 systemd-networkd[1488]: calia4e68665a5d: Link UP Nov 4 04:59:16.285599 systemd-networkd[1488]: calia4e68665a5d: Gained carrier Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:15.994 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0 calico-kube-controllers-b455dd7c6- calico-system 222bf072-72e8-4f95-b557-9dabd6a2bea1 826 0 2025-11-04 04:58:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b455dd7c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4508.0.0-n-4006da48af calico-kube-controllers-b455dd7c6-8tlqq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia4e68665a5d [] [] }} ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Namespace="calico-system" Pod="calico-kube-controllers-b455dd7c6-8tlqq" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:15.994 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Namespace="calico-system" Pod="calico-kube-controllers-b455dd7c6-8tlqq" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.056 
[INFO][4489] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" HandleID="k8s-pod-network.d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Workload="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.058 [INFO][4489] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" HandleID="k8s-pod-network.d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Workload="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4508.0.0-n-4006da48af", "pod":"calico-kube-controllers-b455dd7c6-8tlqq", "timestamp":"2025-11-04 04:59:16.056485421 +0000 UTC"}, Hostname:"ci-4508.0.0-n-4006da48af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.058 [INFO][4489] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.096 [INFO][4489] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.096 [INFO][4489] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4508.0.0-n-4006da48af' Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.153 [INFO][4489] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.172 [INFO][4489] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.181 [INFO][4489] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.187 [INFO][4489] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.192 [INFO][4489] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.193 [INFO][4489] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.195 [INFO][4489] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2 Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.212 [INFO][4489] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.245 [INFO][4489] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.121.71/26] block=192.168.121.64/26 handle="k8s-pod-network.d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.245 [INFO][4489] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.71/26] handle="k8s-pod-network.d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.246 [INFO][4489] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:16.322459 containerd[1595]: 2025-11-04 04:59:16.246 [INFO][4489] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.71/26] IPv6=[] ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" HandleID="k8s-pod-network.d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Workload="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0" Nov 4 04:59:16.323247 containerd[1595]: 2025-11-04 04:59:16.253 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Namespace="calico-system" Pod="calico-kube-controllers-b455dd7c6-8tlqq" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0", GenerateName:"calico-kube-controllers-b455dd7c6-", Namespace:"calico-system", SelfLink:"", UID:"222bf072-72e8-4f95-b557-9dabd6a2bea1", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"b455dd7c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"", Pod:"calico-kube-controllers-b455dd7c6-8tlqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4e68665a5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:16.323247 containerd[1595]: 2025-11-04 04:59:16.254 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.71/32] ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Namespace="calico-system" Pod="calico-kube-controllers-b455dd7c6-8tlqq" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0" Nov 4 04:59:16.323247 containerd[1595]: 2025-11-04 04:59:16.254 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4e68665a5d ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Namespace="calico-system" Pod="calico-kube-controllers-b455dd7c6-8tlqq" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0" Nov 4 04:59:16.323247 containerd[1595]: 2025-11-04 04:59:16.294 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Namespace="calico-system" 
Pod="calico-kube-controllers-b455dd7c6-8tlqq" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0" Nov 4 04:59:16.323247 containerd[1595]: 2025-11-04 04:59:16.299 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Namespace="calico-system" Pod="calico-kube-controllers-b455dd7c6-8tlqq" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0", GenerateName:"calico-kube-controllers-b455dd7c6-", Namespace:"calico-system", SelfLink:"", UID:"222bf072-72e8-4f95-b557-9dabd6a2bea1", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b455dd7c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2", Pod:"calico-kube-controllers-b455dd7c6-8tlqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4e68665a5d", MAC:"2e:f3:c4:98:eb:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:16.323247 containerd[1595]: 2025-11-04 04:59:16.313 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" Namespace="calico-system" Pod="calico-kube-controllers-b455dd7c6-8tlqq" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-calico--kube--controllers--b455dd7c6--8tlqq-eth0" Nov 4 04:59:16.368075 containerd[1595]: time="2025-11-04T04:59:16.366198351Z" level=info msg="connecting to shim d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2" address="unix:///run/containerd/s/eb03ffdd13a68ca3037ac143feb73734a57966b83c506601dea885d16a7fe473" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:16.410919 systemd[1]: Started cri-containerd-d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2.scope - libcontainer container d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2. 
Nov 4 04:59:16.446245 containerd[1595]: time="2025-11-04T04:59:16.446187334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-psj7t,Uid:a786dc4c-0c1b-411d-9e1c-798267553660,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff1b3e67880821054242da65eb2ba9dc701bfd02c45ce3d1492fc8cae23b58de\"" Nov 4 04:59:16.449735 containerd[1595]: time="2025-11-04T04:59:16.449472521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:59:16.492853 containerd[1595]: time="2025-11-04T04:59:16.492794364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b455dd7c6-8tlqq,Uid:222bf072-72e8-4f95-b557-9dabd6a2bea1,Namespace:calico-system,Attempt:0,} returns sandbox id \"d64a46fe2353a39403cb208089f8a039bd68f9c5e967a75a583e6cfaba77eec2\"" Nov 4 04:59:16.535640 systemd-networkd[1488]: calie8dc3cf037a: Gained IPv6LL Nov 4 04:59:16.756203 containerd[1595]: time="2025-11-04T04:59:16.756114174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:16.757259 containerd[1595]: time="2025-11-04T04:59:16.757200081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:59:16.757380 containerd[1595]: time="2025-11-04T04:59:16.757332025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:16.757660 kubelet[2748]: E1104 04:59:16.757603 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:59:16.757933 kubelet[2748]: E1104 04:59:16.757671 
2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:59:16.758312 kubelet[2748]: E1104 04:59:16.758101 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plp4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-psj7t_calico-system(a786dc4c-0c1b-411d-9e1c-798267553660): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:16.758846 containerd[1595]: time="2025-11-04T04:59:16.758739651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:59:16.759882 kubelet[2748]: E1104 04:59:16.759728 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-psj7t" 
podUID="a786dc4c-0c1b-411d-9e1c-798267553660" Nov 4 04:59:16.898325 kubelet[2748]: E1104 04:59:16.898279 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:16.899464 containerd[1595]: time="2025-11-04T04:59:16.899376394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xsjpm,Uid:a7b4ab04-49be-48a0-9728-5a995e7ce19d,Namespace:kube-system,Attempt:0,}" Nov 4 04:59:17.044101 systemd-networkd[1488]: cali01d412fe433: Link UP Nov 4 04:59:17.046562 systemd-networkd[1488]: cali01d412fe433: Gained carrier Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:16.952 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0 coredns-668d6bf9bc- kube-system a7b4ab04-49be-48a0-9728-5a995e7ce19d 820 0 2025-11-04 04:58:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4508.0.0-n-4006da48af coredns-668d6bf9bc-xsjpm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01d412fe433 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Namespace="kube-system" Pod="coredns-668d6bf9bc-xsjpm" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:16.952 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Namespace="kube-system" Pod="coredns-668d6bf9bc-xsjpm" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0" Nov 4 04:59:17.075582 
containerd[1595]: 2025-11-04 04:59:16.983 [INFO][4633] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" HandleID="k8s-pod-network.dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Workload="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:16.983 [INFO][4633] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" HandleID="k8s-pod-network.dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Workload="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f620), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4508.0.0-n-4006da48af", "pod":"coredns-668d6bf9bc-xsjpm", "timestamp":"2025-11-04 04:59:16.983139451 +0000 UTC"}, Hostname:"ci-4508.0.0-n-4006da48af", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:16.983 [INFO][4633] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:16.983 [INFO][4633] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:16.983 [INFO][4633] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4508.0.0-n-4006da48af' Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:16.992 [INFO][4633] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.000 [INFO][4633] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.007 [INFO][4633] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.012 [INFO][4633] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.015 [INFO][4633] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.015 [INFO][4633] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.017 [INFO][4633] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818 Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.023 [INFO][4633] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.034 [INFO][4633] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.121.72/26] block=192.168.121.64/26 handle="k8s-pod-network.dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.034 [INFO][4633] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.72/26] handle="k8s-pod-network.dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" host="ci-4508.0.0-n-4006da48af" Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.034 [INFO][4633] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:59:17.075582 containerd[1595]: 2025-11-04 04:59:17.034 [INFO][4633] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.72/26] IPv6=[] ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" HandleID="k8s-pod-network.dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Workload="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0" Nov 4 04:59:17.077714 containerd[1595]: 2025-11-04 04:59:17.038 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Namespace="kube-system" Pod="coredns-668d6bf9bc-xsjpm" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7b4ab04-49be-48a0-9728-5a995e7ce19d", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"", Pod:"coredns-668d6bf9bc-xsjpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01d412fe433", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:17.077714 containerd[1595]: 2025-11-04 04:59:17.038 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.72/32] ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Namespace="kube-system" Pod="coredns-668d6bf9bc-xsjpm" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0" Nov 4 04:59:17.077714 containerd[1595]: 2025-11-04 04:59:17.039 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01d412fe433 ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Namespace="kube-system" Pod="coredns-668d6bf9bc-xsjpm" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0" Nov 4 04:59:17.077714 containerd[1595]: 2025-11-04 04:59:17.046 [INFO][4621] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Namespace="kube-system" Pod="coredns-668d6bf9bc-xsjpm" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0" Nov 4 04:59:17.077714 containerd[1595]: 2025-11-04 04:59:17.047 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Namespace="kube-system" Pod="coredns-668d6bf9bc-xsjpm" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7b4ab04-49be-48a0-9728-5a995e7ce19d", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4508.0.0-n-4006da48af", ContainerID:"dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818", Pod:"coredns-668d6bf9bc-xsjpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01d412fe433", MAC:"9a:6f:f7:15:49:0d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:59:17.077940 containerd[1595]: 2025-11-04 04:59:17.069 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" Namespace="kube-system" Pod="coredns-668d6bf9bc-xsjpm" WorkloadEndpoint="ci--4508.0.0--n--4006da48af-k8s-coredns--668d6bf9bc--xsjpm-eth0" Nov 4 04:59:17.102820 containerd[1595]: time="2025-11-04T04:59:17.102765827Z" level=info msg="connecting to shim dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818" address="unix:///run/containerd/s/d679b210967491857d6f09e0a4ad317fe97bf54708c879a54fc5473dd08c4ee7" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:59:17.105010 containerd[1595]: time="2025-11-04T04:59:17.104844246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:17.106018 containerd[1595]: time="2025-11-04T04:59:17.105811589Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:59:17.106536 containerd[1595]: time="2025-11-04T04:59:17.106157148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:17.106712 kubelet[2748]: E1104 04:59:17.106669 2748 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:17.106781 kubelet[2748]: E1104 04:59:17.106727 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:17.106920 kubelet[2748]: E1104 04:59:17.106866 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hgb4,ReadOnly:true,MountPath:/var/run/secrets/kubernete
s.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b455dd7c6-8tlqq_calico-system(222bf072-72e8-4f95-b557-9dabd6a2bea1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:17.108876 kubelet[2748]: E1104 04:59:17.108819 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" podUID="222bf072-72e8-4f95-b557-9dabd6a2bea1" Nov 4 04:59:17.142767 systemd[1]: Started cri-containerd-dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818.scope - libcontainer container dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818. Nov 4 04:59:17.205695 kubelet[2748]: E1104 04:59:17.205638 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" podUID="222bf072-72e8-4f95-b557-9dabd6a2bea1" Nov 4 04:59:17.210171 kubelet[2748]: E1104 04:59:17.210080 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-psj7t" podUID="a786dc4c-0c1b-411d-9e1c-798267553660" Nov 4 04:59:17.211629 kubelet[2748]: E1104 04:59:17.211562 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" podUID="58ac8885-f887-4233-b0b8-becfde233cd2" Nov 4 04:59:17.236489 containerd[1595]: time="2025-11-04T04:59:17.235380365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xsjpm,Uid:a7b4ab04-49be-48a0-9728-5a995e7ce19d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818\"" Nov 4 04:59:17.237508 kubelet[2748]: E1104 04:59:17.237446 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:17.240574 containerd[1595]: time="2025-11-04T04:59:17.240531673Z" level=info msg="CreateContainer within sandbox \"dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 04:59:17.268630 containerd[1595]: time="2025-11-04T04:59:17.267003988Z" level=info msg="Container 7a8b7d7178d33df94511b3e4476db72573703a2da865bb33a88863767a219065: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:59:17.275085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524360550.mount: Deactivated successfully. 
Nov 4 04:59:17.283596 containerd[1595]: time="2025-11-04T04:59:17.282973229Z" level=info msg="CreateContainer within sandbox \"dc91451005eb97ba382620455d0c247543ecea8c1d40e0ad471818226b557818\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a8b7d7178d33df94511b3e4476db72573703a2da865bb33a88863767a219065\"" Nov 4 04:59:17.285170 containerd[1595]: time="2025-11-04T04:59:17.285129921Z" level=info msg="StartContainer for \"7a8b7d7178d33df94511b3e4476db72573703a2da865bb33a88863767a219065\"" Nov 4 04:59:17.287277 containerd[1595]: time="2025-11-04T04:59:17.287241239Z" level=info msg="connecting to shim 7a8b7d7178d33df94511b3e4476db72573703a2da865bb33a88863767a219065" address="unix:///run/containerd/s/d679b210967491857d6f09e0a4ad317fe97bf54708c879a54fc5473dd08c4ee7" protocol=ttrpc version=3 Nov 4 04:59:17.328749 systemd[1]: Started cri-containerd-7a8b7d7178d33df94511b3e4476db72573703a2da865bb33a88863767a219065.scope - libcontainer container 7a8b7d7178d33df94511b3e4476db72573703a2da865bb33a88863767a219065. 
Nov 4 04:59:17.397193 containerd[1595]: time="2025-11-04T04:59:17.397136466Z" level=info msg="StartContainer for \"7a8b7d7178d33df94511b3e4476db72573703a2da865bb33a88863767a219065\" returns successfully" Nov 4 04:59:17.558792 systemd-networkd[1488]: calia4e68665a5d: Gained IPv6LL Nov 4 04:59:17.622713 systemd-networkd[1488]: calidee31737a7e: Gained IPv6LL Nov 4 04:59:18.213936 kubelet[2748]: E1104 04:59:18.213772 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:18.215917 kubelet[2748]: E1104 04:59:18.215693 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-psj7t" podUID="a786dc4c-0c1b-411d-9e1c-798267553660" Nov 4 04:59:18.216204 kubelet[2748]: E1104 04:59:18.216001 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" podUID="222bf072-72e8-4f95-b557-9dabd6a2bea1" Nov 4 04:59:18.249421 kubelet[2748]: I1104 04:59:18.248759 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xsjpm" podStartSLOduration=44.248731326 
podStartE2EDuration="44.248731326s" podCreationTimestamp="2025-11-04 04:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:59:18.248643746 +0000 UTC m=+48.541631029" watchObservedRunningTime="2025-11-04 04:59:18.248731326 +0000 UTC m=+48.541718611" Nov 4 04:59:18.711203 systemd-networkd[1488]: cali01d412fe433: Gained IPv6LL Nov 4 04:59:19.216338 kubelet[2748]: E1104 04:59:19.216092 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:20.218911 kubelet[2748]: E1104 04:59:20.218859 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:20.910030 kubelet[2748]: I1104 04:59:20.909129 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:59:20.910030 kubelet[2748]: E1104 04:59:20.909595 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:21.231959 kubelet[2748]: E1104 04:59:21.229808 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:21.233463 kubelet[2748]: E1104 04:59:21.232993 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:26.901678 containerd[1595]: time="2025-11-04T04:59:26.901622765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:59:27.225718 containerd[1595]: 
time="2025-11-04T04:59:27.225662960Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:27.226940 containerd[1595]: time="2025-11-04T04:59:27.226887442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:59:27.227144 containerd[1595]: time="2025-11-04T04:59:27.226992404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:27.227407 kubelet[2748]: E1104 04:59:27.227331 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:27.228285 kubelet[2748]: E1104 04:59:27.227492 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:27.228285 kubelet[2748]: E1104 04:59:27.228071 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c2e3b148ce3f482bae29904bdedc5907,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cs49m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f8f9884d7-2qd2d_calico-system(d8570cf7-4cce-4759-8eb4-4f57fafd9490): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:27.231264 containerd[1595]: time="2025-11-04T04:59:27.231218619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:59:27.535495 containerd[1595]: 
time="2025-11-04T04:59:27.535296498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:27.538425 containerd[1595]: time="2025-11-04T04:59:27.536668052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:59:27.538425 containerd[1595]: time="2025-11-04T04:59:27.536699454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:27.538933 kubelet[2748]: E1104 04:59:27.538876 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:27.539128 kubelet[2748]: E1104 04:59:27.539037 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:27.539405 kubelet[2748]: E1104 04:59:27.539352 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs49m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f8f9884d7-2qd2d_calico-system(d8570cf7-4cce-4759-8eb4-4f57fafd9490): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:27.540753 kubelet[2748]: E1104 04:59:27.540688 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f9884d7-2qd2d" podUID="d8570cf7-4cce-4759-8eb4-4f57fafd9490" Nov 4 04:59:28.901606 containerd[1595]: time="2025-11-04T04:59:28.901460156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:29.215918 containerd[1595]: time="2025-11-04T04:59:29.215847014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:29.217816 containerd[1595]: time="2025-11-04T04:59:29.217516078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:29.218366 containerd[1595]: time="2025-11-04T04:59:29.217562741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:29.219733 kubelet[2748]: E1104 04:59:29.218537 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:29.219733 kubelet[2748]: E1104 04:59:29.218606 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:29.219733 kubelet[2748]: E1104 04:59:29.218791 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9xrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64fff7f795-2rvd7_calico-apiserver(58ac8885-f887-4233-b0b8-becfde233cd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:29.221180 kubelet[2748]: E1104 04:59:29.221042 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" podUID="58ac8885-f887-4233-b0b8-becfde233cd2" Nov 4 04:59:29.279037 systemd[1]: Started sshd@7-164.92.104.185:22-147.75.109.163:43844.service - OpenSSH per-connection server daemon (147.75.109.163:43844). 
Nov 4 04:59:29.438278 sshd[4796]: Accepted publickey for core from 147.75.109.163 port 43844 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 04:59:29.440655 sshd-session[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:29.448596 systemd-logind[1566]: New session 8 of user core. Nov 4 04:59:29.454642 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 04:59:30.019484 containerd[1595]: time="2025-11-04T04:59:30.016999584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:30.272445 sshd[4799]: Connection closed by 147.75.109.163 port 43844 Nov 4 04:59:30.273981 sshd-session[4796]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:30.283745 systemd[1]: sshd@7-164.92.104.185:22-147.75.109.163:43844.service: Deactivated successfully. Nov 4 04:59:30.287189 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 04:59:30.291365 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. Nov 4 04:59:30.294627 systemd-logind[1566]: Removed session 8. 
Nov 4 04:59:30.331732 containerd[1595]: time="2025-11-04T04:59:30.331690078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:30.333449 containerd[1595]: time="2025-11-04T04:59:30.333350122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:30.333449 containerd[1595]: time="2025-11-04T04:59:30.333451256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:30.335347 kubelet[2748]: E1104 04:59:30.334472 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:30.335347 kubelet[2748]: E1104 04:59:30.334557 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:30.335347 kubelet[2748]: E1104 04:59:30.335001 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24rgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64fff7f795-8w5t9_calico-apiserver(c80a2d93-8040-43fb-ae27-fda397ce6d05): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:30.336281 containerd[1595]: time="2025-11-04T04:59:30.336140675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:59:30.336353 kubelet[2748]: E1104 04:59:30.336208 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" podUID="c80a2d93-8040-43fb-ae27-fda397ce6d05" Nov 4 04:59:30.660962 containerd[1595]: time="2025-11-04T04:59:30.659653502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:30.661809 containerd[1595]: time="2025-11-04T04:59:30.661702732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:59:30.661809 containerd[1595]: time="2025-11-04T04:59:30.661757011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:30.662269 kubelet[2748]: E1104 04:59:30.662148 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:59:30.662380 kubelet[2748]: E1104 04:59:30.662360 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:59:30.662807 kubelet[2748]: E1104 04:59:30.662730 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptmjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vv5n9_calico-system(092f500e-4822-4935-b64c-fa41aafe316d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:30.665270 containerd[1595]: time="2025-11-04T04:59:30.665235863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:59:30.973008 containerd[1595]: time="2025-11-04T04:59:30.972943603Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:30.975450 containerd[1595]: time="2025-11-04T04:59:30.975373449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:59:30.975621 containerd[1595]: time="2025-11-04T04:59:30.975511220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:30.975784 kubelet[2748]: E1104 04:59:30.975730 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:59:30.975834 kubelet[2748]: E1104 04:59:30.975803 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:59:30.976097 kubelet[2748]: E1104 04:59:30.976053 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptmjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volu
meDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vv5n9_calico-system(092f500e-4822-4935-b64c-fa41aafe316d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:30.977006 containerd[1595]: time="2025-11-04T04:59:30.976943820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:59:30.978016 kubelet[2748]: E1104 04:59:30.977950 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d" Nov 4 04:59:31.292330 containerd[1595]: time="2025-11-04T04:59:31.291626233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:31.292750 containerd[1595]: time="2025-11-04T04:59:31.292512477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:59:31.292750 containerd[1595]: time="2025-11-04T04:59:31.292577694Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:31.293603 kubelet[2748]: E1104 04:59:31.293548 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:59:31.293965 kubelet[2748]: E1104 04:59:31.293736 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:59:31.293965 kubelet[2748]: E1104 04:59:31.293890 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plp4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-psj7t_calico-system(a786dc4c-0c1b-411d-9e1c-798267553660): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:31.295211 kubelet[2748]: E1104 04:59:31.295171 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-psj7t" podUID="a786dc4c-0c1b-411d-9e1c-798267553660" Nov 4 04:59:32.901167 containerd[1595]: time="2025-11-04T04:59:32.901128156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:59:33.209568 containerd[1595]: time="2025-11-04T04:59:33.209469899Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:33.210606 containerd[1595]: time="2025-11-04T04:59:33.210505975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:59:33.210734 containerd[1595]: time="2025-11-04T04:59:33.210626277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:33.210933 kubelet[2748]: E1104 04:59:33.210890 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:33.211256 kubelet[2748]: E1104 04:59:33.210948 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:33.211256 kubelet[2748]: E1104 04:59:33.211173 
2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hgb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b455dd7c6-8tlqq_calico-system(222bf072-72e8-4f95-b557-9dabd6a2bea1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:33.212553 kubelet[2748]: E1104 04:59:33.212514 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" podUID="222bf072-72e8-4f95-b557-9dabd6a2bea1" Nov 4 04:59:35.297678 systemd[1]: Started sshd@8-164.92.104.185:22-147.75.109.163:57898.service - OpenSSH per-connection server daemon (147.75.109.163:57898). 
Nov 4 04:59:35.392417 sshd[4820]: Accepted publickey for core from 147.75.109.163 port 57898 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 04:59:35.396067 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:35.404631 systemd-logind[1566]: New session 9 of user core. Nov 4 04:59:35.413646 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 04:59:35.576506 sshd[4823]: Connection closed by 147.75.109.163 port 57898 Nov 4 04:59:35.575004 sshd-session[4820]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:35.581348 systemd[1]: sshd@8-164.92.104.185:22-147.75.109.163:57898.service: Deactivated successfully. Nov 4 04:59:35.587019 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 04:59:35.588818 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. Nov 4 04:59:35.590176 systemd-logind[1566]: Removed session 9. Nov 4 04:59:38.905014 kubelet[2748]: E1104 04:59:38.904446 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:39.903369 kubelet[2748]: E1104 04:59:39.903279 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f9884d7-2qd2d" podUID="d8570cf7-4cce-4759-8eb4-4f57fafd9490" Nov 4 04:59:40.599053 systemd[1]: Started sshd@9-164.92.104.185:22-147.75.109.163:57868.service - OpenSSH per-connection server daemon (147.75.109.163:57868). Nov 4 04:59:40.694798 sshd[4842]: Accepted publickey for core from 147.75.109.163 port 57868 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 04:59:40.697453 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:40.706246 systemd-logind[1566]: New session 10 of user core. Nov 4 04:59:40.713793 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 04:59:40.868914 sshd[4845]: Connection closed by 147.75.109.163 port 57868 Nov 4 04:59:40.869345 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:40.882781 systemd[1]: sshd@9-164.92.104.185:22-147.75.109.163:57868.service: Deactivated successfully. Nov 4 04:59:40.886127 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 04:59:40.887857 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. Nov 4 04:59:40.892917 systemd-logind[1566]: Removed session 10. Nov 4 04:59:40.895689 systemd[1]: Started sshd@10-164.92.104.185:22-147.75.109.163:57874.service - OpenSSH per-connection server daemon (147.75.109.163:57874). Nov 4 04:59:40.982174 sshd[4858]: Accepted publickey for core from 147.75.109.163 port 57874 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 04:59:40.984985 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:40.993020 systemd-logind[1566]: New session 11 of user core. Nov 4 04:59:40.998889 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 4 04:59:41.184555 sshd[4861]: Connection closed by 147.75.109.163 port 57874 Nov 4 04:59:41.187152 sshd-session[4858]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:41.205667 systemd[1]: sshd@10-164.92.104.185:22-147.75.109.163:57874.service: Deactivated successfully. Nov 4 04:59:41.214815 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 04:59:41.219407 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit. Nov 4 04:59:41.232595 systemd[1]: Started sshd@11-164.92.104.185:22-147.75.109.163:57882.service - OpenSSH per-connection server daemon (147.75.109.163:57882). Nov 4 04:59:41.234559 systemd-logind[1566]: Removed session 11. Nov 4 04:59:41.311159 sshd[4871]: Accepted publickey for core from 147.75.109.163 port 57882 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 04:59:41.312337 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:41.323534 systemd-logind[1566]: New session 12 of user core. Nov 4 04:59:41.326649 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 04:59:41.449835 sshd[4874]: Connection closed by 147.75.109.163 port 57882 Nov 4 04:59:41.450693 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:41.459191 systemd[1]: sshd@11-164.92.104.185:22-147.75.109.163:57882.service: Deactivated successfully. Nov 4 04:59:41.463144 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 04:59:41.464734 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit. Nov 4 04:59:41.468433 systemd-logind[1566]: Removed session 12. 
Nov 4 04:59:42.901169 kubelet[2748]: E1104 04:59:42.901111 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" podUID="c80a2d93-8040-43fb-ae27-fda397ce6d05" Nov 4 04:59:43.901842 kubelet[2748]: E1104 04:59:43.901297 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" podUID="58ac8885-f887-4233-b0b8-becfde233cd2" Nov 4 04:59:44.899750 kubelet[2748]: E1104 04:59:44.899345 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-psj7t" podUID="a786dc4c-0c1b-411d-9e1c-798267553660" Nov 4 04:59:45.903619 kubelet[2748]: E1104 04:59:45.903432 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d" Nov 4 04:59:46.470244 systemd[1]: Started sshd@12-164.92.104.185:22-147.75.109.163:57884.service - OpenSSH per-connection server daemon (147.75.109.163:57884). Nov 4 04:59:46.559877 sshd[4891]: Accepted publickey for core from 147.75.109.163 port 57884 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 04:59:46.563724 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:46.574479 systemd-logind[1566]: New session 13 of user core. Nov 4 04:59:46.581687 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 04:59:46.706737 sshd[4894]: Connection closed by 147.75.109.163 port 57884 Nov 4 04:59:46.707378 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:46.715026 systemd[1]: sshd@12-164.92.104.185:22-147.75.109.163:57884.service: Deactivated successfully. Nov 4 04:59:46.717502 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 04:59:46.720093 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Nov 4 04:59:46.724306 systemd-logind[1566]: Removed session 13. 
Nov 4 04:59:47.900753 kubelet[2748]: E1104 04:59:47.900649 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:47.902711 kubelet[2748]: E1104 04:59:47.902633 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" podUID="222bf072-72e8-4f95-b557-9dabd6a2bea1" Nov 4 04:59:51.731556 systemd[1]: Started sshd@13-164.92.104.185:22-147.75.109.163:35642.service - OpenSSH per-connection server daemon (147.75.109.163:35642). Nov 4 04:59:51.865823 sshd[4939]: Accepted publickey for core from 147.75.109.163 port 35642 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 04:59:51.871735 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:51.878189 systemd-logind[1566]: New session 14 of user core. Nov 4 04:59:51.883652 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 04:59:52.041421 sshd[4942]: Connection closed by 147.75.109.163 port 35642 Nov 4 04:59:52.040980 sshd-session[4939]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:52.053956 systemd[1]: sshd@13-164.92.104.185:22-147.75.109.163:35642.service: Deactivated successfully. Nov 4 04:59:52.057367 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 04:59:52.059610 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit. Nov 4 04:59:52.063205 systemd-logind[1566]: Removed session 14. 
Nov 4 04:59:52.902089 containerd[1595]: time="2025-11-04T04:59:52.901793048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:59:53.226476 containerd[1595]: time="2025-11-04T04:59:53.226430653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:53.227168 containerd[1595]: time="2025-11-04T04:59:53.227126302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:59:53.227439 containerd[1595]: time="2025-11-04T04:59:53.227225820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:53.228874 kubelet[2748]: E1104 04:59:53.227367 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:53.228874 kubelet[2748]: E1104 04:59:53.227567 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:59:53.228874 kubelet[2748]: E1104 04:59:53.227692 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c2e3b148ce3f482bae29904bdedc5907,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cs49m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f8f9884d7-2qd2d_calico-system(d8570cf7-4cce-4759-8eb4-4f57fafd9490): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:53.231600 containerd[1595]: time="2025-11-04T04:59:53.231553149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:59:53.557619 containerd[1595]: 
time="2025-11-04T04:59:53.557202056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:53.559551 containerd[1595]: time="2025-11-04T04:59:53.558055284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:53.559551 containerd[1595]: time="2025-11-04T04:59:53.558115854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:59:53.559649 kubelet[2748]: E1104 04:59:53.559099 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:53.559649 kubelet[2748]: E1104 04:59:53.559153 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:59:53.559649 kubelet[2748]: E1104 04:59:53.559264 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs49m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f8f9884d7-2qd2d_calico-system(d8570cf7-4cce-4759-8eb4-4f57fafd9490): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:53.560494 kubelet[2748]: E1104 04:59:53.560424 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f9884d7-2qd2d" podUID="d8570cf7-4cce-4759-8eb4-4f57fafd9490" Nov 4 04:59:54.902444 containerd[1595]: time="2025-11-04T04:59:54.901177830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:55.255483 containerd[1595]: time="2025-11-04T04:59:55.255311165Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:55.256681 containerd[1595]: time="2025-11-04T04:59:55.256549485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:55.256681 containerd[1595]: time="2025-11-04T04:59:55.256602290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:55.257074 kubelet[2748]: E1104 04:59:55.256917 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:55.257618 kubelet[2748]: E1104 04:59:55.257071 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:55.258135 kubelet[2748]: E1104 04:59:55.257766 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24rgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64fff7f795-8w5t9_calico-apiserver(c80a2d93-8040-43fb-ae27-fda397ce6d05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:55.259180 kubelet[2748]: E1104 04:59:55.259127 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" podUID="c80a2d93-8040-43fb-ae27-fda397ce6d05" Nov 4 04:59:56.900641 kubelet[2748]: E1104 04:59:56.900005 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 04:59:57.061777 systemd[1]: Started 
sshd@14-164.92.104.185:22-147.75.109.163:35648.service - OpenSSH per-connection server daemon (147.75.109.163:35648).
Nov 4 04:59:57.186524 sshd[4955]: Accepted publickey for core from 147.75.109.163 port 35648 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 04:59:57.190059 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:57.199168 systemd-logind[1566]: New session 15 of user core.
Nov 4 04:59:57.207562 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 4 04:59:57.383465 sshd[4958]: Connection closed by 147.75.109.163 port 35648
Nov 4 04:59:57.383828 sshd-session[4955]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:57.392024 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit.
Nov 4 04:59:57.393684 systemd[1]: sshd@14-164.92.104.185:22-147.75.109.163:35648.service: Deactivated successfully.
Nov 4 04:59:57.397570 systemd[1]: session-15.scope: Deactivated successfully.
Nov 4 04:59:57.402328 systemd-logind[1566]: Removed session 15.
Nov 4 04:59:58.900849 containerd[1595]: time="2025-11-04T04:59:58.900498894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:59:59.257549 containerd[1595]: time="2025-11-04T04:59:59.257431774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:59.258788 containerd[1595]: time="2025-11-04T04:59:59.258727274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:59:59.258924 containerd[1595]: time="2025-11-04T04:59:59.258822375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:59.259175 kubelet[2748]: E1104 04:59:59.259122 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:59.259518 kubelet[2748]: E1104 04:59:59.259185 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:59.259809 kubelet[2748]: E1104 04:59:59.259759 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hgb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b455dd7c6-8tlqq_calico-system(222bf072-72e8-4f95-b557-9dabd6a2bea1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:59.261272 kubelet[2748]: E1104 04:59:59.261238 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" podUID="222bf072-72e8-4f95-b557-9dabd6a2bea1" Nov 4 04:59:59.261622 containerd[1595]: time="2025-11-04T04:59:59.261525978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:59:59.576116 containerd[1595]: time="2025-11-04T04:59:59.575515279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 
04:59:59.579030 containerd[1595]: time="2025-11-04T04:59:59.578891446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:59:59.579030 containerd[1595]: time="2025-11-04T04:59:59.579002149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:59.579667 kubelet[2748]: E1104 04:59:59.579612 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:59.579818 kubelet[2748]: E1104 04:59:59.579760 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:59:59.580023 kubelet[2748]: E1104 04:59:59.579974 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9xrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64fff7f795-2rvd7_calico-apiserver(58ac8885-f887-4233-b0b8-becfde233cd2): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:59.581345 kubelet[2748]: E1104 04:59:59.581272 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" podUID="58ac8885-f887-4233-b0b8-becfde233cd2" Nov 4 04:59:59.912889 containerd[1595]: time="2025-11-04T04:59:59.911900196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:59:59.913656 kubelet[2748]: E1104 04:59:59.913439 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 05:00:00.252717 containerd[1595]: time="2025-11-04T05:00:00.252639005Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:00:00.253918 containerd[1595]: time="2025-11-04T05:00:00.253829268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 05:00:00.254533 containerd[1595]: time="2025-11-04T05:00:00.253986275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 05:00:00.254617 kubelet[2748]: E1104 05:00:00.254225 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 05:00:00.254617 kubelet[2748]: E1104 05:00:00.254306 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 05:00:00.254762 kubelet[2748]: E1104 05:00:00.254611 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptmjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivil
egeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vv5n9_calico-system(092f500e-4822-4935-b64c-fa41aafe316d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 05:00:00.255842 containerd[1595]: time="2025-11-04T05:00:00.255764923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 05:00:01.079762 containerd[1595]: time="2025-11-04T05:00:01.079631917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:00:01.080747 containerd[1595]: time="2025-11-04T05:00:01.080663567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 05:00:01.080842 containerd[1595]: time="2025-11-04T05:00:01.080808103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 05:00:01.082496 kubelet[2748]: E1104 05:00:01.080981 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 05:00:01.083794 kubelet[2748]: E1104 05:00:01.082532 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 05:00:01.083794 kubelet[2748]: E1104 05:00:01.082919 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plp4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-psj7t_calico-system(a786dc4c-0c1b-411d-9e1c-798267553660): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 05:00:01.087180 kubelet[2748]: E1104 05:00:01.084623 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-psj7t" podUID="a786dc4c-0c1b-411d-9e1c-798267553660" Nov 4 05:00:01.087342 containerd[1595]: time="2025-11-04T05:00:01.084796932Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 4 05:00:02.419317 systemd[1]: Started sshd@15-164.92.104.185:22-147.75.109.163:34754.service - OpenSSH per-connection server daemon (147.75.109.163:34754).
Nov 4 05:00:02.601206 sshd[4971]: Accepted publickey for core from 147.75.109.163 port 34754 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 05:00:02.608391 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:02.626717 systemd-logind[1566]: New session 16 of user core.
Nov 4 05:00:02.641351 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 4 05:00:02.939441 sshd[4974]: Connection closed by 147.75.109.163 port 34754
Nov 4 05:00:02.938664 sshd-session[4971]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:02.963116 systemd[1]: sshd@15-164.92.104.185:22-147.75.109.163:34754.service: Deactivated successfully.
Nov 4 05:00:02.972011 systemd[1]: session-16.scope: Deactivated successfully.
Nov 4 05:00:02.979875 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit.
Nov 4 05:00:02.991667 systemd-logind[1566]: Removed session 16.
Nov 4 05:00:02.995157 systemd[1]: Started sshd@16-164.92.104.185:22-147.75.109.163:34760.service - OpenSSH per-connection server daemon (147.75.109.163:34760).
Nov 4 05:00:03.182741 sshd[4986]: Accepted publickey for core from 147.75.109.163 port 34760 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 05:00:03.186814 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:03.199995 systemd-logind[1566]: New session 17 of user core.
Nov 4 05:00:03.209505 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 4 05:00:03.945748 sshd[4989]: Connection closed by 147.75.109.163 port 34760
Nov 4 05:00:03.950715 sshd-session[4986]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:03.978752 systemd[1]: Started sshd@17-164.92.104.185:22-147.75.109.163:34774.service - OpenSSH per-connection server daemon (147.75.109.163:34774).
Nov 4 05:00:03.988554 containerd[1595]: time="2025-11-04T05:00:03.987191548Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 05:00:04.000795 systemd[1]: sshd@16-164.92.104.185:22-147.75.109.163:34760.service: Deactivated successfully.
Nov 4 05:00:04.006660 containerd[1595]: time="2025-11-04T05:00:04.006265604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 4 05:00:04.009501 containerd[1595]: time="2025-11-04T05:00:04.007597095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Nov 4 05:00:04.015529 kubelet[2748]: E1104 05:00:04.011592 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 4 05:00:04.015529 kubelet[2748]: E1104 05:00:04.011688 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 4 05:00:04.015529
kubelet[2748]: E1104 05:00:04.011910 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptmjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-vv5n9_calico-system(092f500e-4822-4935-b64c-fa41aafe316d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 05:00:04.013953 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 05:00:04.017738 kubelet[2748]: E1104 05:00:04.015856 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d" Nov 4 05:00:04.019460 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Nov 4 05:00:04.031806 systemd-logind[1566]: Removed session 17. Nov 4 05:00:04.233243 sshd[4996]: Accepted publickey for core from 147.75.109.163 port 34774 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 05:00:04.244216 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:00:04.265763 systemd-logind[1566]: New session 18 of user core. Nov 4 05:00:04.274652 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 4 05:00:04.906434 kubelet[2748]: E1104 05:00:04.905677 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f9884d7-2qd2d" podUID="d8570cf7-4cce-4759-8eb4-4f57fafd9490"
Nov 4 05:00:05.727193 sshd[5002]: Connection closed by 147.75.109.163 port 34774
Nov 4 05:00:05.728767 sshd-session[4996]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:05.752987 systemd[1]: sshd@17-164.92.104.185:22-147.75.109.163:34774.service: Deactivated successfully.
Nov 4 05:00:05.760675 systemd[1]: session-18.scope: Deactivated successfully.
Nov 4 05:00:05.764154 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit.
Nov 4 05:00:05.772558 systemd-logind[1566]: Removed session 18.
Nov 4 05:00:05.779768 systemd[1]: Started sshd@18-164.92.104.185:22-147.75.109.163:34790.service - OpenSSH per-connection server daemon (147.75.109.163:34790).
Nov 4 05:00:05.987505 sshd[5018]: Accepted publickey for core from 147.75.109.163 port 34790 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc
Nov 4 05:00:05.992128 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:06.006876 systemd-logind[1566]: New session 19 of user core.
Nov 4 05:00:06.014771 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 05:00:06.749649 sshd[5022]: Connection closed by 147.75.109.163 port 34790 Nov 4 05:00:06.749745 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Nov 4 05:00:06.772245 systemd[1]: sshd@18-164.92.104.185:22-147.75.109.163:34790.service: Deactivated successfully. Nov 4 05:00:06.782294 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 05:00:06.790536 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. Nov 4 05:00:06.795487 systemd-logind[1566]: Removed session 19. Nov 4 05:00:06.799903 systemd[1]: Started sshd@19-164.92.104.185:22-147.75.109.163:34796.service - OpenSSH per-connection server daemon (147.75.109.163:34796). Nov 4 05:00:06.978160 sshd[5034]: Accepted publickey for core from 147.75.109.163 port 34796 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 05:00:06.981889 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:00:06.997902 systemd-logind[1566]: New session 20 of user core. Nov 4 05:00:07.012031 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 05:00:07.230740 sshd[5038]: Connection closed by 147.75.109.163 port 34796 Nov 4 05:00:07.233078 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Nov 4 05:00:07.243292 systemd[1]: sshd@19-164.92.104.185:22-147.75.109.163:34796.service: Deactivated successfully. Nov 4 05:00:07.250227 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 05:00:07.255712 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. Nov 4 05:00:07.259039 systemd-logind[1566]: Removed session 20. 
Nov 4 05:00:09.907660 kubelet[2748]: E1104 05:00:09.907569 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" podUID="222bf072-72e8-4f95-b557-9dabd6a2bea1" Nov 4 05:00:10.901816 kubelet[2748]: E1104 05:00:10.900272 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" podUID="c80a2d93-8040-43fb-ae27-fda397ce6d05" Nov 4 05:00:10.901816 kubelet[2748]: E1104 05:00:10.900729 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" podUID="58ac8885-f887-4233-b0b8-becfde233cd2" Nov 4 05:00:12.249869 systemd[1]: Started sshd@20-164.92.104.185:22-147.75.109.163:46918.service - OpenSSH per-connection server daemon (147.75.109.163:46918). 
Nov 4 05:00:12.372077 sshd[5051]: Accepted publickey for core from 147.75.109.163 port 46918 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 05:00:12.375622 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:00:12.383946 systemd-logind[1566]: New session 21 of user core. Nov 4 05:00:12.392680 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 05:00:12.633786 sshd[5054]: Connection closed by 147.75.109.163 port 46918 Nov 4 05:00:12.634505 sshd-session[5051]: pam_unix(sshd:session): session closed for user core Nov 4 05:00:12.639964 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Nov 4 05:00:12.644997 systemd[1]: sshd@20-164.92.104.185:22-147.75.109.163:46918.service: Deactivated successfully. Nov 4 05:00:12.648771 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 05:00:12.654308 systemd-logind[1566]: Removed session 21. Nov 4 05:00:13.901419 kubelet[2748]: E1104 05:00:13.900940 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-psj7t" podUID="a786dc4c-0c1b-411d-9e1c-798267553660" Nov 4 05:00:14.900950 kubelet[2748]: E1104 05:00:14.900884 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d" Nov 4 05:00:17.655870 systemd[1]: Started sshd@21-164.92.104.185:22-147.75.109.163:46922.service - OpenSSH per-connection server daemon (147.75.109.163:46922). Nov 4 05:00:17.724566 sshd[5068]: Accepted publickey for core from 147.75.109.163 port 46922 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 05:00:17.726898 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:00:17.737012 systemd-logind[1566]: New session 22 of user core. Nov 4 05:00:17.744383 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 05:00:17.950087 sshd[5071]: Connection closed by 147.75.109.163 port 46922 Nov 4 05:00:17.951069 sshd-session[5068]: pam_unix(sshd:session): session closed for user core Nov 4 05:00:17.958572 systemd[1]: sshd@21-164.92.104.185:22-147.75.109.163:46922.service: Deactivated successfully. Nov 4 05:00:17.960112 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit. Nov 4 05:00:17.963177 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 05:00:17.969825 systemd-logind[1566]: Removed session 22. 
Nov 4 05:00:19.901512 kubelet[2748]: E1104 05:00:19.900822 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 05:00:19.903850 kubelet[2748]: E1104 05:00:19.903793 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f9884d7-2qd2d" podUID="d8570cf7-4cce-4759-8eb4-4f57fafd9490" Nov 4 05:00:22.900501 kubelet[2748]: E1104 05:00:22.900434 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-2rvd7" podUID="58ac8885-f887-4233-b0b8-becfde233cd2" Nov 4 05:00:22.975319 systemd[1]: Started sshd@22-164.92.104.185:22-147.75.109.163:42176.service - OpenSSH per-connection server daemon (147.75.109.163:42176). 
Nov 4 05:00:23.088808 sshd[5106]: Accepted publickey for core from 147.75.109.163 port 42176 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 05:00:23.091813 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:00:23.100640 systemd-logind[1566]: New session 23 of user core. Nov 4 05:00:23.105666 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 4 05:00:23.350222 sshd[5109]: Connection closed by 147.75.109.163 port 42176 Nov 4 05:00:23.350864 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Nov 4 05:00:23.354739 systemd[1]: sshd@22-164.92.104.185:22-147.75.109.163:42176.service: Deactivated successfully. Nov 4 05:00:23.359239 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 05:00:23.362809 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit. Nov 4 05:00:23.366461 systemd-logind[1566]: Removed session 23. Nov 4 05:00:23.901476 kubelet[2748]: E1104 05:00:23.901429 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b455dd7c6-8tlqq" podUID="222bf072-72e8-4f95-b557-9dabd6a2bea1" Nov 4 05:00:24.900543 kubelet[2748]: E1104 05:00:24.900001 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64fff7f795-8w5t9" podUID="c80a2d93-8040-43fb-ae27-fda397ce6d05" Nov 4 05:00:26.899444 kubelet[2748]: E1104 05:00:26.899172 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 05:00:26.901930 kubelet[2748]: E1104 05:00:26.901795 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-psj7t" podUID="a786dc4c-0c1b-411d-9e1c-798267553660" Nov 4 05:00:28.367891 systemd[1]: Started sshd@23-164.92.104.185:22-147.75.109.163:42178.service - OpenSSH per-connection server daemon (147.75.109.163:42178). Nov 4 05:00:28.467271 sshd[5122]: Accepted publickey for core from 147.75.109.163 port 42178 ssh2: RSA SHA256:o8fAK4OcNn4PY2CF+mKCSzj0EYuaS5VSc17a6u3duFc Nov 4 05:00:28.469800 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:00:28.479770 systemd-logind[1566]: New session 24 of user core. Nov 4 05:00:28.486697 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 4 05:00:28.734034 sshd[5125]: Connection closed by 147.75.109.163 port 42178 Nov 4 05:00:28.735616 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Nov 4 05:00:28.740704 systemd[1]: sshd@23-164.92.104.185:22-147.75.109.163:42178.service: Deactivated successfully. Nov 4 05:00:28.744075 systemd[1]: session-24.scope: Deactivated successfully. 
Nov 4 05:00:28.745340 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit. Nov 4 05:00:28.748864 systemd-logind[1566]: Removed session 24. Nov 4 05:00:29.907091 kubelet[2748]: E1104 05:00:29.906494 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vv5n9" podUID="092f500e-4822-4935-b64c-fa41aafe316d" Nov 4 05:00:30.899014 kubelet[2748]: E1104 05:00:30.898373 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 05:00:32.902771 kubelet[2748]: E1104 05:00:32.902688 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f9884d7-2qd2d" podUID="d8570cf7-4cce-4759-8eb4-4f57fafd9490"